00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 115 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3293 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.098 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.135 Fetching changes from the remote Git repository 00:00:00.137 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.172 Using shallow fetch with depth 1 00:00:00.172 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.172 > git --version # timeout=10 00:00:00.196 > git --version # 'git version 2.39.2' 00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.958 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.967 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.976 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.976 > git config core.sparsecheckout # timeout=10 00:00:05.984 > git read-tree -mu HEAD # timeout=10 00:00:05.998 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.023 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.024 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.100 [Pipeline] Start of Pipeline 00:00:06.110 [Pipeline] library 00:00:06.111 Loading library shm_lib@master 00:00:06.111 Library shm_lib@master is cached. Copying from home. 00:00:06.123 [Pipeline] node 00:00:06.131 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.132 [Pipeline] { 00:00:06.139 [Pipeline] catchError 00:00:06.140 [Pipeline] { 00:00:06.148 [Pipeline] wrap 00:00:06.155 [Pipeline] { 00:00:06.161 [Pipeline] stage 00:00:06.162 [Pipeline] { (Prologue) 00:00:06.314 [Pipeline] sh 00:00:06.588 + logger -p user.info -t JENKINS-CI 00:00:06.606 [Pipeline] echo 00:00:06.608 Node: GP11 00:00:06.616 [Pipeline] sh 00:00:06.908 [Pipeline] setCustomBuildProperty 00:00:06.921 [Pipeline] echo 00:00:06.923 Cleanup processes 00:00:06.928 [Pipeline] sh 00:00:07.210 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.210 3529594 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.220 [Pipeline] sh 00:00:07.498 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.498 ++ grep -v 'sudo pgrep' 00:00:07.498 ++ awk '{print $1}' 00:00:07.498 + sudo kill -9 00:00:07.498 + true 00:00:07.513 [Pipeline] cleanWs 00:00:07.524 [WS-CLEANUP] Deleting project workspace... 00:00:07.524 [WS-CLEANUP] Deferred wipeout is used... 
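# NOTE: The "Cleanup processes" step traced above expands to roughly the
# idiom sketched below: list every process whose command line still mentions
# the workspace, drop the pgrep invocation itself, and kill the rest.
# kill_stale_procs is an illustrative name, not part of the real pipeline;
# the "|| true" keeps the step green when nothing matched, which is why the
# trace shows "kill -9" with no PIDs followed by "+ true".
kill_stale_procs() {
  local workspace=$1
  local pids
  # pgrep -af prints PID plus full command line; filter out pgrep itself
  pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true   # unquoted on purpose: one argument per PID
}
# kill_stale_procs /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk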
00:00:07.530 [WS-CLEANUP] done 00:00:07.534 [Pipeline] setCustomBuildProperty 00:00:07.551 [Pipeline] sh 00:00:07.832 + sudo git config --global --replace-all safe.directory '*' 00:00:07.917 [Pipeline] httpRequest 00:00:07.936 [Pipeline] echo 00:00:07.938 Sorcerer 10.211.164.101 is alive 00:00:07.946 [Pipeline] httpRequest 00:00:07.950 HttpMethod: GET 00:00:07.951 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.951 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.957 Response Code: HTTP/1.1 200 OK 00:00:07.957 Success: Status code 200 is in the accepted range: 200,404 00:00:07.958 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:16.133 [Pipeline] sh 00:00:16.416 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:16.433 [Pipeline] httpRequest 00:00:16.460 [Pipeline] echo 00:00:16.462 Sorcerer 10.211.164.101 is alive 00:00:16.471 [Pipeline] httpRequest 00:00:16.477 HttpMethod: GET 00:00:16.478 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:16.478 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:16.496 Response Code: HTTP/1.1 200 OK 00:00:16.497 Success: Status code 200 is in the accepted range: 200,404 00:00:16.497 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:07.462 [Pipeline] sh 00:01:07.742 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:10.282 [Pipeline] sh 00:01:10.560 + git -C spdk log --oneline -n5 00:01:10.560 241d0f3c9 test: fix dpdk builds on ubuntu24 00:01:10.560 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs 00:01:10.560 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:10.560 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:10.560 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:10.574 [Pipeline] withCredentials 00:01:10.582 > git --version # timeout=10 00:01:10.593 > git --version # 'git version 2.39.2' 00:01:10.607 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:10.609 [Pipeline] { 00:01:10.617 [Pipeline] retry 00:01:10.619 [Pipeline] { 00:01:10.633 [Pipeline] sh 00:01:10.905 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:11.480 [Pipeline] } 00:01:11.502 [Pipeline] // retry 00:01:11.508 [Pipeline] } 00:01:11.529 [Pipeline] // withCredentials 00:01:11.539 [Pipeline] httpRequest 00:01:11.583 [Pipeline] echo 00:01:11.585 Sorcerer 10.211.164.101 is alive 00:01:11.594 [Pipeline] httpRequest 00:01:11.599 HttpMethod: GET 00:01:11.599 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:11.600 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:11.601 Response Code: HTTP/1.1 200 OK 00:01:11.601 Success: Status code 200 is in the accepted range: 200,404 00:01:11.602 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:17.537 [Pipeline] sh 00:01:17.819 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:19.751 [Pipeline] sh 00:01:20.033 + git -C dpdk log --oneline -n5 00:01:20.033 eeb0605f11 version: 23.11.0 00:01:20.033 238778122a doc: update release notes for 
23.11 00:01:20.033 46aa6b3cfc doc: fix description of RSS features 00:01:20.033 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:20.033 7e421ae345 devtools: support skipping forbid rule check 00:01:20.043 [Pipeline] } 00:01:20.062 [Pipeline] // stage 00:01:20.071 [Pipeline] stage 00:01:20.073 [Pipeline] { (Prepare) 00:01:20.094 [Pipeline] writeFile 00:01:20.110 [Pipeline] sh 00:01:20.391 + logger -p user.info -t JENKINS-CI 00:01:20.412 [Pipeline] sh 00:01:20.697 + logger -p user.info -t JENKINS-CI 00:01:20.707 [Pipeline] sh 00:01:20.984 + cat autorun-spdk.conf 00:01:20.984 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.984 SPDK_TEST_NVMF=1 00:01:20.984 SPDK_TEST_NVME_CLI=1 00:01:20.984 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.984 SPDK_TEST_NVMF_NICS=e810 00:01:20.984 SPDK_TEST_VFIOUSER=1 00:01:20.984 SPDK_RUN_UBSAN=1 00:01:20.984 NET_TYPE=phy 00:01:20.984 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:20.984 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:20.990 RUN_NIGHTLY=1 00:01:20.995 [Pipeline] readFile 00:01:21.019 [Pipeline] withEnv 00:01:21.020 [Pipeline] { 00:01:21.033 [Pipeline] sh 00:01:21.315 + set -ex 00:01:21.315 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:21.315 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.315 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.315 ++ SPDK_TEST_NVMF=1 00:01:21.315 ++ SPDK_TEST_NVME_CLI=1 00:01:21.315 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.315 ++ SPDK_TEST_NVMF_NICS=e810 00:01:21.315 ++ SPDK_TEST_VFIOUSER=1 00:01:21.315 ++ SPDK_RUN_UBSAN=1 00:01:21.315 ++ NET_TYPE=phy 00:01:21.315 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:21.315 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.315 ++ RUN_NIGHTLY=1 00:01:21.315 + case $SPDK_TEST_NVMF_NICS in 00:01:21.315 + DRIVERS=ice 00:01:21.315 + [[ tcp == \r\d\m\a ]] 00:01:21.315 + [[ -n ice ]] 00:01:21.315 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:21.315 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:21.315 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:21.315 rmmod: ERROR: Module irdma is not currently loaded 00:01:21.315 rmmod: ERROR: Module i40iw is not currently loaded 00:01:21.315 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:21.315 + true 00:01:21.315 + for D in $DRIVERS 00:01:21.315 + sudo modprobe ice 00:01:21.315 + exit 0 00:01:21.323 [Pipeline] } 00:01:21.339 [Pipeline] // withEnv 00:01:21.345 [Pipeline] } 00:01:21.361 [Pipeline] // stage 00:01:21.370 [Pipeline] catchError 00:01:21.371 [Pipeline] { 00:01:21.386 [Pipeline] timeout 00:01:21.386 Timeout set to expire in 50 min 00:01:21.388 [Pipeline] { 00:01:21.402 [Pipeline] stage 00:01:21.404 [Pipeline] { (Tests) 00:01:21.418 [Pipeline] sh 00:01:21.699 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.699 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.699 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.699 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:21.699 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.699 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.699 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:21.699 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.699 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.699 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.699 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:21.699 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.699 + source /etc/os-release 00:01:21.699 ++ NAME='Fedora Linux' 00:01:21.699 ++ VERSION='38 (Cloud Edition)' 00:01:21.699 ++ ID=fedora 00:01:21.699 ++ VERSION_ID=38 00:01:21.699 ++ VERSION_CODENAME= 00:01:21.699 ++ PLATFORM_ID=platform:f38 00:01:21.699 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:21.699 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.699 ++ LOGO=fedora-logo-icon 00:01:21.699 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:21.699 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.699 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:21.699 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.699 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.699 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.699 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:21.699 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.699 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:21.699 ++ SUPPORT_END=2024-05-14 00:01:21.699 ++ VARIANT='Cloud Edition' 00:01:21.699 ++ VARIANT_ID=cloud 00:01:21.699 + uname -a 00:01:21.699 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:21.699 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:22.632 Hugepages 00:01:22.632 node hugesize free / total 00:01:22.632 node0 1048576kB 0 / 0 00:01:22.632 node0 2048kB 0 / 0 00:01:22.632 node1 1048576kB 0 / 0 00:01:22.632 node1 2048kB 0 / 0 00:01:22.632 00:01:22.632 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.632 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:22.633 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:22.633 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:22.633 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:22.633 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:22.633 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:22.633 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:22.633 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:22.633 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:22.633 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:22.633 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:22.633 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:22.633 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:22.633 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:22.633 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:22.633 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:22.633 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:22.633 + rm -f /tmp/spdk-ld-path 00:01:22.633 + source autorun-spdk.conf 00:01:22.633 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.633 ++ SPDK_TEST_NVMF=1 00:01:22.633 ++ SPDK_TEST_NVME_CLI=1 00:01:22.633 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.633 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.633 ++ SPDK_TEST_VFIOUSER=1 00:01:22.633 ++ SPDK_RUN_UBSAN=1 00:01:22.633 ++ NET_TYPE=phy 00:01:22.633 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:22.633 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.633 ++ RUN_NIGHTLY=1 00:01:22.633 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.633 + [[ -n '' ]] 00:01:22.633 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.891 + for M in /var/spdk/build-*-manifest.txt 00:01:22.891 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.891 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.891 + for M in /var/spdk/build-*-manifest.txt 00:01:22.891 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.891 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.891 ++ uname 00:01:22.891 + [[ Linux == \L\i\n\u\x ]] 00:01:22.891 + sudo dmesg -T 00:01:22.891 + sudo dmesg --clear 00:01:22.891 + dmesg_pid=3530299 00:01:22.891 + sudo dmesg -Tw 00:01:22.891 + [[ Fedora Linux == FreeBSD ]] 00:01:22.891 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.891 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.891 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.891 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.891 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.891 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.891 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.891 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.891 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.891 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.891 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.891 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.891 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.891 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.891 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.891 Test configuration: 00:01:22.891 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.891 SPDK_TEST_NVMF=1 00:01:22.891 SPDK_TEST_NVME_CLI=1 00:01:22.891 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.891 SPDK_TEST_NVMF_NICS=e810 00:01:22.891 SPDK_TEST_VFIOUSER=1 00:01:22.891 SPDK_RUN_UBSAN=1 00:01:22.891 NET_TYPE=phy 00:01:22.891 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:22.891 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.891 RUN_NIGHTLY=1 00:47:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:22.891 00:47:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.891 00:47:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.891 00:47:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.891 00:47:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.891 00:47:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.891 00:47:15 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.891 00:47:15 -- paths/export.sh@5 -- $ export PATH 00:01:22.891 00:47:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.891 00:47:15 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:22.891 00:47:15 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:22.891 00:47:15 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721861235.XXXXXX 00:01:22.891 00:47:15 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721861235.e2F6qQ 00:01:22.891 00:47:15 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:22.891 00:47:15 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:01:22.891 00:47:15 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.891 00:47:15 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:22.891 00:47:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:22.891 00:47:15 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.891 00:47:15 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:22.891 00:47:15 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:22.891 00:47:15 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.891 00:47:15 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:22.891 00:47:15 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:01:22.891 00:47:15 -- pm/common@17 -- $ local monitor 00:01:22.891 00:47:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.891 00:47:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.891 00:47:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.891 00:47:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.891 00:47:15 -- pm/common@21 -- $ date +%s 00:01:22.891 00:47:15 -- pm/common@21 -- $ date +%s 00:01:22.891 00:47:15 -- pm/common@25 -- $ sleep 1 00:01:22.891 00:47:15 -- pm/common@21 -- $ date +%s 00:01:22.891 00:47:15 -- pm/common@21 -- $ date +%s 00:01:22.891 00:47:15 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721861235 00:01:22.892 00:47:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721861235 00:01:22.892 00:47:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721861235 00:01:22.892 00:47:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721861235 00:01:22.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721861235_collect-vmstat.pm.log 00:01:22.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721861235_collect-cpu-load.pm.log 00:01:22.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721861235_collect-cpu-temp.pm.log 00:01:22.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721861235_collect-bmc-pm.bmc.pm.log 00:01:23.822 00:47:16 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:01:23.822 00:47:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.822 00:47:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.822 00:47:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.822 00:47:16 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.822 Wed Jul 24 10:47:16 PM UTC 2024 00:01:23.822 00:47:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.822 v24.05-15-g241d0f3c9 00:01:23.822 00:47:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.822 00:47:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.822 00:47:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.822 00:47:16 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:23.822 00:47:16 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:23.822 00:47:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.080 ************************************ 00:01:24.080 START TEST ubsan 00:01:24.080 ************************************ 00:01:24.080 00:47:16 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:24.080 using ubsan 00:01:24.080 00:01:24.080 real 0m0.000s 00:01:24.080 user 0m0.000s 00:01:24.080 sys 0m0.000s 00:01:24.080 00:47:16 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:24.080 00:47:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.080 ************************************ 00:01:24.080 END TEST ubsan 00:01:24.080 ************************************ 00:01:24.080 00:47:16 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:24.080 00:47:16 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:24.080 00:47:16 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:24.080 00:47:16 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:24.080 00:47:16 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:24.080 00:47:16 -- common/autotest_common.sh@10 -- $ set +x 
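# NOTE: run_test (from SPDK's autotest_common.sh) is what produces the
# START TEST / END TEST banners and the "real/user/sys" timing seen in the
# ubsan block above. A simplified, illustrative reconstruction; the real
# helper also manages xtrace state and bookkeeping not shown here:
run_test() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"                    # run the test command; timing goes to stderr
  local rc=$?                  # capture the command's exit status
  echo "************ END TEST $name ************"
  return $rc
}
# run_test ubsan echo 'using ubsan'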
00:01:24.080 ************************************ 00:01:24.080 START TEST build_native_dpdk 00:01:24.080 ************************************ 00:01:24.080 00:47:17 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:24.080 eeb0605f11 version: 23.11.0 00:01:24.080 238778122a doc: update release notes for 23.11 00:01:24.080 46aa6b3cfc doc: fix description of RSS features 00:01:24.080 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:24.080 7e421ae345 devtools: support skipping forbid rule check 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:24.080 00:47:17 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:24.080 patching file config/rte_config.h 00:01:24.080 Hunk #1 succeeded at 60 (offset 1 line). 00:01:24.080 00:47:17 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:24.080 00:47:17 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:24.081 00:47:17 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:24.081 00:47:17 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:24.081 patching file lib/pcapng/rte_pcapng.c 00:01:24.081 00:47:17 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:24.081 00:47:17 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:24.081 00:47:17 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:24.081 00:47:17 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:24.081 00:47:17 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:28.273 The Meson build system 00:01:28.273 Version: 1.3.1 00:01:28.273 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:28.273 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:28.273 Build type: native build 00:01:28.273 Program cat found: YES (/usr/bin/cat) 00:01:28.273 Project name: DPDK 00:01:28.273 Project version: 23.11.0 00:01:28.273 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:28.273 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:28.273 Host machine cpu family: x86_64 00:01:28.273 Host machine cpu: x86_64 00:01:28.273 Message: ## Building in Developer Mode ## 00:01:28.273 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:28.273 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:28.273 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:28.273 Program python3 found: YES (/usr/bin/python3) 00:01:28.273 Program cat found: YES (/usr/bin/cat) 00:01:28.273 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
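# NOTE: The lengthy cmp_versions/decimal trace above (scripts/common.sh)
# compares DPDK versions field-by-field to decide which compatibility
# patches to apply: "lt 23.11.0 21.11.0" returned 1 (skip), while
# "lt 23.11.0 24.07.0" returned 0 and triggered the rte_pcapng patch.
# A compact equivalent using GNU sort -V, illustrative only, not the
# helper the trace actually runs:
version_lt() {
  # true when $1 is strictly older than $2
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
# version_lt 23.11.0 21.11.0 || echo 'not older'   # -> not older
# version_lt 23.11.0 24.07.0 && echo 'older'       # -> older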
00:01:28.273 Compiler for C supports arguments -march=native: YES 00:01:28.273 Checking for size of "void *" : 8 00:01:28.273 Checking for size of "void *" : 8 (cached) 00:01:28.273 Library m found: YES 00:01:28.273 Library numa found: YES 00:01:28.273 Has header "numaif.h" : YES 00:01:28.273 Library fdt found: NO 00:01:28.273 Library execinfo found: NO 00:01:28.273 Has header "execinfo.h" : YES 00:01:28.273 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:28.273 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:28.273 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:28.273 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:28.273 Run-time dependency openssl found: YES 3.0.9 00:01:28.273 Run-time dependency libpcap found: YES 1.10.4 00:01:28.273 Has header "pcap.h" with dependency libpcap: YES 00:01:28.273 Compiler for C supports arguments -Wcast-qual: YES 00:01:28.273 Compiler for C supports arguments -Wdeprecated: YES 00:01:28.273 Compiler for C supports arguments -Wformat: YES 00:01:28.273 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:28.273 Compiler for C supports arguments -Wformat-security: NO 00:01:28.273 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.273 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:28.273 Compiler for C supports arguments -Wnested-externs: YES 00:01:28.273 Compiler for C supports arguments -Wold-style-definition: YES 00:01:28.273 Compiler for C supports arguments -Wpointer-arith: YES 00:01:28.273 Compiler for C supports arguments -Wsign-compare: YES 00:01:28.273 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:28.273 Compiler for C supports arguments -Wundef: YES 00:01:28.273 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.273 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:28.273 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:28.273 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.273 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:28.273 Program objdump found: YES (/usr/bin/objdump) 00:01:28.273 Compiler for C supports arguments -mavx512f: YES 00:01:28.273 Checking if "AVX512 checking" compiles: YES 00:01:28.273 Fetching value of define "__SSE4_2__" : 1 00:01:28.273 Fetching value of define "__AES__" : 1 00:01:28.273 Fetching value of define "__AVX__" : 1 00:01:28.273 Fetching value of define "__AVX2__" : (undefined) 00:01:28.273 Fetching value of define "__AVX512BW__" : (undefined) 00:01:28.273 Fetching value of define "__AVX512CD__" : (undefined) 00:01:28.273 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:28.273 Fetching value of define "__AVX512F__" : (undefined) 00:01:28.273 Fetching value of define "__AVX512VL__" : (undefined) 00:01:28.273 Fetching value of define "__PCLMUL__" : 1 00:01:28.273 Fetching value of define "__RDRND__" : 1 00:01:28.273 Fetching value of define "__RDSEED__" : (undefined) 00:01:28.273 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:28.273 Fetching value of define "__znver1__" : (undefined) 00:01:28.273 Fetching value of define "__znver2__" : (undefined) 00:01:28.273 Fetching value of define "__znver3__" : (undefined) 00:01:28.273 Fetching value of define "__znver4__" : (undefined) 00:01:28.273 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:28.273 Message: lib/log: Defining dependency "log" 00:01:28.273 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:28.273 Message: lib/telemetry: Defining dependency "telemetry" 00:01:28.273 Checking for function "getentropy" : NO 00:01:28.273 Message: lib/eal: Defining dependency "eal" 00:01:28.273 Message: lib/ring: Defining dependency "ring" 00:01:28.273 Message: lib/rcu: Defining dependency "rcu" 00:01:28.273 Message: lib/mempool: Defining dependency "mempool" 00:01:28.273 Message: lib/mbuf: Defining dependency "mbuf" 00:01:28.273 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:28.273 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.273 Compiler for C supports arguments -mpclmul: YES 00:01:28.273 Compiler for C supports arguments -maes: YES 00:01:28.273 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.274 Compiler for C supports arguments -mavx512bw: YES 00:01:28.274 Compiler for C supports arguments -mavx512dq: YES 00:01:28.274 Compiler for C supports arguments -mavx512vl: YES 00:01:28.274 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:28.274 Compiler for C supports arguments -mavx2: YES 00:01:28.274 Compiler for C supports arguments -mavx: YES 00:01:28.274 Message: lib/net: Defining dependency "net" 00:01:28.274 Message: lib/meter: Defining dependency "meter" 00:01:28.274 Message: lib/ethdev: Defining dependency "ethdev" 00:01:28.274 Message: lib/pci: Defining dependency "pci" 00:01:28.274 Message: lib/cmdline: Defining dependency "cmdline" 00:01:28.274 Message: lib/metrics: Defining dependency "metrics" 00:01:28.274 Message: lib/hash: Defining dependency "hash" 00:01:28.274 Message: lib/timer: Defining dependency "timer" 00:01:28.274 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.274 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:28.274 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:28.274 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:28.274 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:28.274 Message: lib/acl: Defining dependency "acl" 00:01:28.274 Message: lib/bbdev: Defining dependency "bbdev" 00:01:28.274 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:28.274 Run-time dependency libelf found: YES 0.190 00:01:28.274 Message: lib/bpf: Defining dependency "bpf" 00:01:28.274 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:28.274 Message: lib/compressdev: Defining dependency "compressdev" 00:01:28.274 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:28.274 Message: lib/distributor: Defining dependency "distributor" 00:01:28.274 Message: lib/dmadev: Defining dependency "dmadev" 00:01:28.274 Message: lib/efd: Defining dependency "efd" 00:01:28.274 Message: lib/eventdev: Defining dependency "eventdev" 00:01:28.274 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:28.274 Message: lib/gpudev: Defining dependency "gpudev" 00:01:28.274 Message: lib/gro: Defining dependency "gro" 00:01:28.274 Message: lib/gso: Defining dependency "gso" 00:01:28.274 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:28.274 Message: lib/jobstats: Defining dependency "jobstats" 00:01:28.274 Message: lib/latencystats: Defining dependency "latencystats" 00:01:28.274 Message: lib/lpm: Defining dependency "lpm" 00:01:28.274 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.274 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:28.274 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:28.274 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:28.274 Message: lib/member: Defining dependency "member" 00:01:28.274 Message: lib/pcapng: Defining dependency "pcapng" 00:01:28.274 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:28.274 Message: lib/power: Defining dependency "power" 00:01:28.274 Message: lib/rawdev: Defining dependency "rawdev" 00:01:28.274 Message: lib/regexdev: Defining dependency "regexdev" 00:01:28.274 Message: lib/mldev: Defining dependency "mldev" 00:01:28.274 Message: lib/rib: Defining dependency "rib" 00:01:28.274 Message: lib/reorder: Defining dependency "reorder" 00:01:28.274 Message: lib/sched: Defining dependency "sched" 00:01:28.274 Message: lib/security: Defining dependency "security" 00:01:28.274 Message: lib/stack: Defining dependency "stack" 00:01:28.274 Has header "linux/userfaultfd.h" : YES 00:01:28.274 Has header "linux/vduse.h" : YES 00:01:28.274 Message: lib/vhost: Defining dependency "vhost" 00:01:28.274 Message: lib/ipsec: Defining dependency "ipsec" 00:01:28.274 Message: lib/pdcp: Defining dependency "pdcp" 00:01:28.274 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:28.274 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:28.274 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:28.274 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:28.274 Message: lib/fib: Defining dependency "fib" 00:01:28.274 Message: lib/port: Defining dependency "port" 00:01:28.274 Message: lib/pdump: Defining dependency "pdump" 00:01:28.274 Message: lib/table: Defining dependency "table" 00:01:28.274 Message: lib/pipeline: Defining dependency "pipeline" 00:01:28.274 Message: lib/graph: Defining dependency "graph" 00:01:28.274 Message: lib/node: Defining dependency "node" 00:01:29.653 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.653 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.653 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.653 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.653 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:29.653 Compiler for C supports arguments -Wno-unused-value: YES 00:01:29.653 Compiler for C supports arguments -Wno-format: YES 00:01:29.653 Compiler for C supports arguments -Wno-format-security: YES 00:01:29.653 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:29.653 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:29.653 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:29.653 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:29.653 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.653 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.653 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.653 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:29.653 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:29.653 Has header "sys/epoll.h" : YES 00:01:29.653 Program doxygen found: YES (/usr/bin/doxygen) 00:01:29.653 Configuring doxy-api-html.conf using configuration 00:01:29.653 Configuring doxy-api-man.conf using configuration 00:01:29.653 Program mandb found: YES (/usr/bin/mandb) 00:01:29.653 Program sphinx-build found: NO 00:01:29.653 Configuring rte_build_config.h using configuration 00:01:29.653 Message: 00:01:29.653 ================= 00:01:29.653 Applications Enabled 00:01:29.653 
================= 00:01:29.653 00:01:29.653 apps: 00:01:29.653 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:29.653 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:29.653 test-pmd, test-regex, test-sad, test-security-perf, 00:01:29.653 00:01:29.653 Message: 00:01:29.653 ================= 00:01:29.653 Libraries Enabled 00:01:29.653 ================= 00:01:29.653 00:01:29.653 libs: 00:01:29.653 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:29.653 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:29.653 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:29.653 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:29.653 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:29.653 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:29.653 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:29.653 00:01:29.653 00:01:29.653 Message: 00:01:29.653 =============== 00:01:29.653 Drivers Enabled 00:01:29.653 =============== 00:01:29.653 00:01:29.653 common: 00:01:29.653 00:01:29.653 bus: 00:01:29.653 pci, vdev, 00:01:29.653 mempool: 00:01:29.653 ring, 00:01:29.653 dma: 00:01:29.653 00:01:29.653 net: 00:01:29.653 i40e, 00:01:29.653 raw: 00:01:29.653 00:01:29.653 crypto: 00:01:29.653 00:01:29.653 compress: 00:01:29.653 00:01:29.653 regex: 00:01:29.653 00:01:29.653 ml: 00:01:29.653 00:01:29.653 vdpa: 00:01:29.653 00:01:29.653 event: 00:01:29.653 00:01:29.653 baseband: 00:01:29.653 00:01:29.653 gpu: 00:01:29.653 00:01:29.653 00:01:29.653 Message: 00:01:29.653 ================= 00:01:29.653 Content Skipped 00:01:29.653 ================= 00:01:29.653 00:01:29.653 apps: 00:01:29.653 00:01:29.653 libs: 00:01:29.653 00:01:29.653 drivers: 00:01:29.653 common/cpt: not in enabled drivers build config 00:01:29.653 common/dpaax: not in enabled drivers build config 00:01:29.653 common/iavf: not in enabled drivers build config 00:01:29.653 common/idpf: not in enabled drivers build config 00:01:29.653 common/mvep: not in enabled drivers build config 00:01:29.653 common/octeontx: not in enabled drivers build config 00:01:29.653 bus/auxiliary: not in enabled drivers build config 00:01:29.653 bus/cdx: not in enabled drivers build config 00:01:29.653 bus/dpaa: not in enabled drivers build config 00:01:29.653 bus/fslmc: not in enabled drivers build config 00:01:29.653 bus/ifpga: not in enabled drivers build config 00:01:29.653 bus/platform: not in enabled drivers build config 00:01:29.653 bus/vmbus: not in enabled drivers build config 00:01:29.653 common/cnxk: not in enabled drivers build config 00:01:29.653 common/mlx5: not in enabled drivers build config 00:01:29.653 common/nfp: not in enabled drivers build config 00:01:29.653 common/qat: not in enabled drivers build config 00:01:29.653 common/sfc_efx: not in enabled drivers build config 00:01:29.653 mempool/bucket: not in enabled drivers build config 00:01:29.653 mempool/cnxk: not in enabled drivers build config 00:01:29.653 mempool/dpaa: not in enabled drivers build config 00:01:29.653 mempool/dpaa2: not in enabled drivers build config 00:01:29.653 mempool/octeontx: not in enabled drivers build config 00:01:29.653 mempool/stack: not in enabled drivers build config 00:01:29.653 dma/cnxk: not in enabled drivers build config 00:01:29.653 dma/dpaa: not in enabled drivers build config 00:01:29.653 dma/dpaa2: not in enabled drivers build 
config 00:01:29.653 dma/hisilicon: not in enabled drivers build config 00:01:29.654 dma/idxd: not in enabled drivers build config 00:01:29.654 dma/ioat: not in enabled drivers build config 00:01:29.654 dma/skeleton: not in enabled drivers build config 00:01:29.654 net/af_packet: not in enabled drivers build config 00:01:29.654 net/af_xdp: not in enabled drivers build config 00:01:29.654 net/ark: not in enabled drivers build config 00:01:29.654 net/atlantic: not in enabled drivers build config 00:01:29.654 net/avp: not in enabled drivers build config 00:01:29.654 net/axgbe: not in enabled drivers build config 00:01:29.654 net/bnx2x: not in enabled drivers build config 00:01:29.654 net/bnxt: not in enabled drivers build config 00:01:29.654 net/bonding: not in enabled drivers build config 00:01:29.654 net/cnxk: not in enabled drivers build config 00:01:29.654 net/cpfl: not in enabled drivers build config 00:01:29.654 net/cxgbe: not in enabled drivers build config 00:01:29.654 net/dpaa: not in enabled drivers build config 00:01:29.654 net/dpaa2: not in enabled drivers build config 00:01:29.654 net/e1000: not in enabled drivers build config 00:01:29.654 net/ena: not in enabled drivers build config 00:01:29.654 net/enetc: not in enabled drivers build config 00:01:29.654 net/enetfec: not in enabled drivers build config 00:01:29.654 net/enic: not in enabled drivers build config 00:01:29.654 net/failsafe: not in enabled drivers build config 00:01:29.654 net/fm10k: not in enabled drivers build config 00:01:29.654 net/gve: not in enabled drivers build config 00:01:29.654 net/hinic: not in enabled drivers build config 00:01:29.654 net/hns3: not in enabled drivers build config 00:01:29.654 net/iavf: not in enabled drivers build config 00:01:29.654 net/ice: not in enabled drivers build config 00:01:29.654 net/idpf: not in enabled drivers build config 00:01:29.654 net/igc: not in enabled drivers build config 00:01:29.654 net/ionic: not in enabled drivers build config 00:01:29.654 net/ipn3ke: not in enabled drivers build config 00:01:29.654 net/ixgbe: not in enabled drivers build config 00:01:29.654 net/mana: not in enabled drivers build config 00:01:29.654 net/memif: not in enabled drivers build config 00:01:29.654 net/mlx4: not in enabled drivers build config 00:01:29.654 net/mlx5: not in enabled drivers build config 00:01:29.654 net/mvneta: not in enabled drivers build config 00:01:29.654 net/mvpp2: not in enabled drivers build config 00:01:29.654 net/netvsc: not in enabled drivers build config 00:01:29.654 net/nfb: not in enabled drivers build config 00:01:29.654 net/nfp: not in enabled drivers build config 00:01:29.654 net/ngbe: not in enabled drivers build config 00:01:29.654 net/null: not in enabled drivers build config 00:01:29.654 net/octeontx: not in enabled drivers build config 00:01:29.654 net/octeon_ep: not in enabled drivers build config 00:01:29.654 net/pcap: not in enabled drivers build config 00:01:29.654 net/pfe: not in enabled drivers build config 00:01:29.654 net/qede: not in enabled drivers build config 00:01:29.654 net/ring: not in enabled drivers build config 00:01:29.654 net/sfc: not in enabled drivers build config 00:01:29.654 net/softnic: not in enabled drivers build config 00:01:29.654 net/tap: not in enabled drivers build config 00:01:29.654 net/thunderx: not in enabled drivers build config 00:01:29.654 net/txgbe: not in enabled drivers build config 00:01:29.654 net/vdev_netvsc: not in enabled drivers build config 00:01:29.654 net/vhost: not in enabled drivers build config 
00:01:29.654 net/virtio: not in enabled drivers build config
00:01:29.654 net/vmxnet3: not in enabled drivers build config
00:01:29.654 raw/cnxk_bphy: not in enabled drivers build config
00:01:29.654 raw/cnxk_gpio: not in enabled drivers build config
00:01:29.654 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:29.654 raw/ifpga: not in enabled drivers build config
00:01:29.654 raw/ntb: not in enabled drivers build config
00:01:29.654 raw/skeleton: not in enabled drivers build config
00:01:29.654 crypto/armv8: not in enabled drivers build config
00:01:29.654 crypto/bcmfs: not in enabled drivers build config
00:01:29.654 crypto/caam_jr: not in enabled drivers build config
00:01:29.654 crypto/ccp: not in enabled drivers build config
00:01:29.654 crypto/cnxk: not in enabled drivers build config
00:01:29.654 crypto/dpaa_sec: not in enabled drivers build config
00:01:29.654 crypto/dpaa2_sec: not in enabled drivers build config
00:01:29.654 crypto/ipsec_mb: not in enabled drivers build config
00:01:29.654 crypto/mlx5: not in enabled drivers build config
00:01:29.654 crypto/mvsam: not in enabled drivers build config
00:01:29.654 crypto/nitrox: not in enabled drivers build config
00:01:29.654 crypto/null: not in enabled drivers build config
00:01:29.654 crypto/octeontx: not in enabled drivers build config
00:01:29.654 crypto/openssl: not in enabled drivers build config
00:01:29.654 crypto/scheduler: not in enabled drivers build config
00:01:29.654 crypto/uadk: not in enabled drivers build config
00:01:29.654 crypto/virtio: not in enabled drivers build config
00:01:29.654 compress/isal: not in enabled drivers build config
00:01:29.654 compress/mlx5: not in enabled drivers build config
00:01:29.654 compress/octeontx: not in enabled drivers build config
00:01:29.654 compress/zlib: not in enabled drivers build config
00:01:29.654 regex/mlx5: not in enabled drivers build config
00:01:29.654 regex/cn9k: not in enabled drivers build config
00:01:29.654 ml/cnxk: not in enabled drivers build config
00:01:29.654 vdpa/ifc: not in enabled drivers build config
00:01:29.654 vdpa/mlx5: not in enabled drivers build config
00:01:29.654 vdpa/nfp: not in enabled drivers build config
00:01:29.654 vdpa/sfc: not in enabled drivers build config
00:01:29.654 event/cnxk: not in enabled drivers build config
00:01:29.654 event/dlb2: not in enabled drivers build config
00:01:29.654 event/dpaa: not in enabled drivers build config
00:01:29.654 event/dpaa2: not in enabled drivers build config
00:01:29.654 event/dsw: not in enabled drivers build config
00:01:29.654 event/opdl: not in enabled drivers build config
00:01:29.654 event/skeleton: not in enabled drivers build config
00:01:29.654 event/sw: not in enabled drivers build config
00:01:29.654 event/octeontx: not in enabled drivers build config
00:01:29.654 baseband/acc: not in enabled drivers build config
00:01:29.654 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:29.654 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:29.654 baseband/la12xx: not in enabled drivers build config
00:01:29.654 baseband/null: not in enabled drivers build config
00:01:29.654 baseband/turbo_sw: not in enabled drivers build config
00:01:29.654 gpu/cuda: not in enabled drivers build config
00:01:29.654 
00:01:29.654 
00:01:29.654 Build targets in project: 220
00:01:29.654 
00:01:29.655 
00:01:29.655 DPDK 23.11.0
00:01:29.655 
00:01:29.655 
00:01:29.655 User defined options
00:01:29.655 libdir : lib
00:01:29.655 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:29.655 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:29.655 c_link_args : 
00:01:29.655 enable_docs : false
00:01:29.655 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:29.655 enable_kmods : false
00:01:29.655 machine : native
00:01:29.655 tests : false
00:01:29.655 
00:01:29.655 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:29.655 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:29.655 00:47:22 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:29.655 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:29.916 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:29.916 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:29.916 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:29.916 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:29.916 [5/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:29.916 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:29.916 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:29.916 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:29.916 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:29.916 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:29.916 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:29.916 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:29.916 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:29.916 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:29.916 [15/710] Linking static target lib/librte_kvargs.a
00:01:29.916 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:29.916 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:30.178 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:30.178 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:30.178 [20/710] Linking static target lib/librte_log.a
00:01:30.178 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:30.178 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.750 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:30.750 [24/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:31.073 [25/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.073 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:31.073 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:31.073 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:31.073 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:31.073 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:31.073 [31/710] Linking target lib/librte_log.so.24.0
00:01:31.073 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:31.073 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:31.073 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:31.073 [35/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:31.073 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:31.073 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:31.073 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:31.073 [39/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:31.073 [40/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:31.073 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:31.073 [42/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:31.073 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:31.073 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:31.073 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:31.073 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:31.073 [47/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:31.073 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:31.073 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:31.073 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:31.073 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:31.073 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:31.073 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:31.073 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:31.073 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:31.073 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:31.073 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:31.073 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:31.073 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:31.334 [60/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:31.334 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:31.334 [62/710] Linking target lib/librte_kvargs.so.24.0
00:01:31.334 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:31.334 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:31.594 [65/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:31.594 [66/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:31.594 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:31.594 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:31.594 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:31.594 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:31.594 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:31.855 [72/710] Linking static target lib/librte_pci.a
00:01:31.855 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:31.855 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:31.855 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:31.855 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:31.855 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:32.117 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:32.117 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:32.117 [80/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.117 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:32.117 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:32.117 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:32.117 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:32.117 [85/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:32.117 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:32.117 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:32.117 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:32.117 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:32.117 [90/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:32.117 [91/710] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:32.117 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:32.117 [93/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:32.117 [94/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:32.377 [95/710] Linking static target lib/librte_ring.a
00:01:32.377 [96/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:32.377 [97/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:32.377 [98/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:32.377 [99/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:32.377 [100/710] Linking static target lib/librte_meter.a
00:01:32.377 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:32.377 [102/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:32.378 [103/710] Linking static target lib/librte_telemetry.a
00:01:32.378 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:32.378 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:32.378 [106/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:32.378 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:32.378 [108/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:32.638 [109/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:32.638 [110/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:32.638 [111/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:32.638 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:32.638 [113/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:32.638 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:32.638 [115/710] Linking static target lib/librte_eal.a
00:01:32.638 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.638 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.903 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:32.903 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:32.903 [120/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:32.903 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:32.903 [122/710] Linking static target lib/librte_net.a
00:01:32.903 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:32.903 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:32.903 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:32.903 [126/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.166 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:33.166 [128/710] Linking static target lib/librte_cmdline.a
00:01:33.166 [129/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:33.166 [130/710] Linking target lib/librte_telemetry.so.24.0
00:01:33.166 [131/710] Linking static target lib/librte_mempool.a
00:01:33.166 [132/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.166 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:01:33.166 [134/710] Linking static target lib/librte_cfgfile.a
00:01:33.166 [135/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:33.427 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:01:33.427 [137/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:33.427 [138/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:33.427 [139/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:01:33.427 [140/710] Linking static target lib/librte_metrics.a
00:01:33.427 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:33.427 [142/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:01:33.427 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:33.427 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:01:33.691 [145/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:33.691 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:01:33.691 [147/710] Linking static target lib/librte_rcu.a
00:01:33.691 [148/710] Linking static target lib/librte_bitratestats.a
00:01:33.691 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:01:33.691 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:01:33.691 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:01:33.691 [152/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:33.691 [153/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:01:33.691 [154/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
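(The "User defined options" summary above maps onto a `meson setup` invocation of roughly the following shape. This is an illustrative reconstruction, not the literal command: the real invocation lives in common/autobuild_common.sh and is not captured in this log, and the WARNING above indicates the script still uses the older `meson [options]` spelling rather than `meson setup`.)

    # Hypothetical reconstruction of the DPDK configure step implied by the
    # options summary above; option names are real meson/DPDK options, the
    # exact command line is an assumption.
    meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48

(Trimming `enable_drivers` to the bus, ring-mempool, and i40e components is what produces the long "not in enabled drivers build config" listing above: every other driver is skipped, which keeps this 710-target build small relative to a full DPDK build.)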
00:01:33.954 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:01:33.954 [156/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:01:33.954 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:01:33.954 [158/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:33.954 [159/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.954 [160/710] Linking static target lib/librte_timer.a
00:01:33.954 [161/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.954 [162/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:33.954 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.954 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.954 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:34.213 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:01:34.213 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:01:34.213 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:01:34.213 [169/710] Linking static target lib/librte_bbdev.a
00:01:34.213 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:34.475 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.475 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:34.475 [173/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:34.475 [174/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.475 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:34.475 [176/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:01:34.475 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:34.475 [178/710] Linking static target lib/librte_compressdev.a
00:01:34.736 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:01:34.736 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:01:34.736 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:01:35.003 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:01:35.003 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:01:35.003 [184/710] Linking static target lib/librte_distributor.a
00:01:35.003 [185/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:35.003 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:01:35.262 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.262 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:01:35.262 [189/710] Linking static target lib/librte_bpf.a
00:01:35.262 [190/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:35.262 [191/710] Linking static target lib/librte_dmadev.a
00:01:35.262 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:01:35.262 [193/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:01:35.521 [194/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.521 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:01:35.521 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:01:35.521 [197/710] Linking static target lib/librte_dispatcher.a
00:01:35.521 [198/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.521 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:01:35.521 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:01:35.521 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:01:35.521 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:01:35.521 [203/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:01:35.521 [204/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:01:35.521 [205/710] Linking static target lib/librte_gpudev.a
00:01:35.781 [206/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:35.781 [207/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:35.781 [208/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:01:35.781 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:01:35.781 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:01:35.781 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.781 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:01:35.781 [213/710] Linking static target lib/librte_gro.a
00:01:35.781 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:35.781 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:01:36.044 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:36.044 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:01:36.044 [218/710] Linking static target lib/librte_jobstats.a
00:01:36.044 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:01:36.044 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:01:36.044 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:01:36.044 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:01:36.044 [223/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:01:36.303 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:01:36.303 [225/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:01:36.303 [226/710] Linking static target lib/librte_latencystats.a
00:01:36.562 [227/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:01:36.562 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:01:36.562 [229/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:01:36.562 [230/710] Linking static target lib/member/libsketch_avx512_tmp.a
00:01:36.562 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:01:36.562 [232/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:36.562 [233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:01:36.562 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:01:36.562 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:01:36.562 [236/710] Linking static target lib/librte_ip_frag.a
00:01:36.823 [237/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:36.823 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:01:36.823 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:01:36.823 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:36.823 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:37.089 [242/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.089 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:37.089 [244/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:01:37.089 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:01:37.089 [246/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:01:37.089 [247/710] Linking static target lib/librte_gso.a
00:01:37.089 [248/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:01:37.347 [249/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.347 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:37.347 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:01:37.347 [252/710] Linking static target lib/librte_regexdev.a
00:01:37.347 [253/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:01:37.347 [254/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:01:37.347 [255/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:01:37.347 [256/710] Linking static target lib/librte_rawdev.a
00:01:37.347 [257/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:01:37.347 [258/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.347 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:01:37.612 [260/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:01:37.612 [261/710] Linking static target lib/librte_pcapng.a
00:01:37.612 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:01:37.612 [263/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:01:37.612 [264/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:01:37.612 [265/710] Linking static target lib/librte_mldev.a
00:01:37.612 [266/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:01:37.612 [267/710] Linking static target lib/librte_efd.a
00:01:37.612 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:01:37.873 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:01:37.873 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:01:37.873 [271/710] Linking static target lib/librte_stack.a
00:01:37.873 [272/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:01:37.873 [273/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:01:37.873 [274/710] Linking static target lib/acl/libavx2_tmp.a
00:01:37.873 [275/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:38.134 [276/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:38.134 [277/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.134 [278/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:01:38.134 [279/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:38.134 [280/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.134 [281/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:01:38.134 [282/710] Linking static target lib/librte_lpm.a
00:01:38.134 [283/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:38.134 [284/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.134 [285/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:38.134 [286/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.134 [287/710] Linking static target lib/librte_hash.a
00:01:38.395 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:38.395 [289/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:38.395 [290/710] Linking static target lib/librte_reorder.a
00:01:38.395 [291/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:38.395 [292/710] Linking static target lib/librte_power.a
00:01:38.395 [293/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:38.395 [294/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:01:38.395 [295/710] Linking static target lib/acl/libavx512_tmp.a
00:01:38.660 [296/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:38.660 [297/710] Linking static target lib/librte_acl.a
00:01:38.660 [298/710] Linking static target lib/librte_security.a
00:01:38.660 [299/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.660 [300/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.660 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:01:38.660 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:38.921 [303/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:01:38.921 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:38.921 [305/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.921 [306/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:01:38.921 [307/710] Linking static target lib/librte_rib.a
00:01:38.921 [308/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:38.921 [309/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:01:38.921 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:01:38.921 [311/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.921 [312/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.182 [313/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:01:39.182 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:01:39.182 [315/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:39.182 [316/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:01:39.182 [317/710] Linking static target lib/librte_mbuf.a
00:01:39.182 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.182 [319/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:01:39.182 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a
00:01:39.182 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:01:39.182 [322/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:01:39.182 [323/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:01:39.444 [324/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:01:39.444 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:01:39.444 [326/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:39.444 [327/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.444 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.703 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.703 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:01:39.703 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:01:39.968 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:01:39.968 [333/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:01:39.968 [334/710] Linking static target lib/librte_eventdev.a
00:01:39.968 [335/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.968 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:01:39.968 [337/710] Linking static target lib/librte_member.a
00:01:40.226 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:01:40.226 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:40.226 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:01:40.487 [341/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:01:40.487 [342/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:40.487 [343/710] Linking static target lib/librte_cryptodev.a
00:01:40.487 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:01:40.487 [345/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:01:40.487 [346/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:01:40.487 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:01:40.487 [348/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:01:40.487 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:01:40.487 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:01:40.487 [351/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:40.487 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.487 [353/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:01:40.487 [354/710] Linking static target lib/librte_ethdev.a
00:01:40.750 [355/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:01:40.750 [356/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:01:40.750 [357/710] Linking static target lib/librte_sched.a
00:01:40.750 [358/710] Linking static target lib/librte_fib.a
00:01:40.750 [359/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:01:40.750 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:01:40.750 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:01:40.750 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:01:41.009 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:01:41.009 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:01:41.009 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:01:41.009 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:01:41.009 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:41.276 [368/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.276 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:01:41.276 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.276 [371/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:01:41.276 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:01:41.276 [373/710] Compiling C object lib/librte_node.a.p/node_null.c.o
00:01:41.536 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:01:41.536 [375/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:01:41.801 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:01:41.801 [377/710] Linking static target lib/librte_pdump.a
00:01:41.801 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:01:41.801 [379/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:41.801 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:01:41.801 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:01:41.801 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:01:41.801 [383/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:01:41.801 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:01:41.801 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:01:41.801 [386/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:01:41.801 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:41.801 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:01:42.064 [389/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:01:42.064 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.064 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:01:42.064 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:01:42.064 [393/710] Linking static target lib/librte_ipsec.a
00:01:42.064 [394/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:01:42.326 [395/710] Linking static target lib/librte_table.a
00:01:42.326 [396/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:01:42.326 [397/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.326 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o
00:01:42.326 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:01:42.591 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:01:42.591 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.591 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:01:42.849 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:01:43.122 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:43.122 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:01:43.122 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:01:43.122 [407/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:01:43.122 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:43.122 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:43.122 [410/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:43.122 [411/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:43.122 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:43.383 [413/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:01:43.383 [414/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.383 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.383 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:01:43.383 [417/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:01:43.383 [418/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.649 [419/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:43.649 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:43.649 [421/710] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:43.649 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:43.649 [423/710] Linking static target drivers/librte_bus_vdev.a
00:01:43.649 [424/710] Linking target lib/librte_eal.so.24.0
00:01:43.649 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:01:43.649 [426/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:43.920 [427/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:01:43.920 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:01:43.920 [429/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:01:43.920 [430/710] Linking static target lib/librte_port.a
00:01:43.920 [431/710] Linking target lib/librte_ring.so.24.0
00:01:43.920 [432/710] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:43.920 [433/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.209 [434/710] Linking target lib/librte_meter.so.24.0
00:01:44.209 [435/710] Linking target lib/librte_pci.so.24.0
00:01:44.209 [436/710] Linking target lib/librte_timer.so.24.0
00:01:44.209 [437/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:01:44.209 [438/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:01:44.209 [439/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:01:44.209 [440/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:01:44.209 [441/710] Linking target lib/librte_acl.so.24.0
00:01:44.209 [442/710] Linking target lib/librte_cfgfile.so.24.0
00:01:44.209 [443/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:01:44.209 [444/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:01:44.209 [445/710] Linking target lib/librte_mempool.so.24.0
00:01:44.209 [446/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:01:44.209 [447/710] Linking target lib/librte_rcu.so.24.0
00:01:44.476 [448/710] Linking target lib/librte_dmadev.so.24.0
00:01:44.476 [449/710] Linking target lib/librte_jobstats.so.24.0
00:01:44.476 [450/710] Linking target lib/librte_stack.so.24.0
00:01:44.476 [451/710] Linking target lib/librte_rawdev.so.24.0
00:01:44.476 [452/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:01:44.476 [453/710] Linking static target lib/librte_graph.a
00:01:44.476 [454/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:44.476 [455/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:44.476 [456/710] Linking static target drivers/librte_bus_pci.a
00:01:44.476 [457/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:01:44.476 [458/710] Linking target drivers/librte_bus_vdev.so.24.0
00:01:44.476 [459/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:01:44.476 [460/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:44.476 [461/710] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:44.476 [462/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:01:44.476 [463/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:01:44.476 [464/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:01:44.735 [465/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:01:44.735 [466/710] Linking target lib/librte_mbuf.so.24.0
00:01:44.735 [467/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:01:44.735 [468/710] Linking target lib/librte_rib.so.24.0
00:01:44.735 [469/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:01:44.735 [470/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:01:44.735 [471/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:01:45.001 [472/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:45.001 [473/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:01:45.001 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:45.001 [475/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:01:45.002 [476/710] Linking static target drivers/librte_mempool_ring.a
00:01:45.002 [477/710] Linking target lib/librte_fib.so.24.0
00:01:45.002 [478/710] Linking target lib/librte_bbdev.so.24.0
00:01:45.002 [479/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:01:45.002 [480/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:01:45.002 [481/710] Linking target lib/librte_net.so.24.0
00:01:45.002 [482/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:01:45.002 [483/710] Linking target lib/librte_compressdev.so.24.0
00:01:45.002 [484/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:01:45.002 [485/710] Linking target lib/librte_cryptodev.so.24.0
00:01:45.002 [486/710] Linking target lib/librte_gpudev.so.24.0
00:01:45.002 [487/710] Linking target lib/librte_distributor.so.24.0
00:01:45.002 [488/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:01:45.002 [489/710] Linking target lib/librte_regexdev.so.24.0
00:01:45.002 [490/710] Linking target lib/librte_mldev.so.24.0
00:01:45.002 [491/710] Linking target lib/librte_reorder.so.24.0
00:01:45.002 [492/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:45.264 [493/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:01:45.264 [494/710] Linking target lib/librte_sched.so.24.0
00:01:45.264 [495/710] Linking target drivers/librte_mempool_ring.so.24.0
00:01:45.264 [496/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.264 [497/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:01:45.264 [498/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.264 [499/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:01:45.264 [500/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:01:45.264 [501/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:01:45.264 [502/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:01:45.264 [503/710] Linking target drivers/librte_bus_pci.so.24.0
00:01:45.264 [504/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:01:45.264 [505/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:01:45.264 [506/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:01:45.264 [507/710] Linking target lib/librte_cmdline.so.24.0
00:01:45.264 [508/710] Linking target lib/librte_hash.so.24.0
00:01:45.526 [509/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:01:45.526 [510/710] Linking target lib/librte_security.so.24.0
00:01:45.526 [511/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:01:45.526 [512/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.526 [513/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:01:45.526 [514/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:01:45.526 [515/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:01:45.789 [516/710] Linking target lib/librte_efd.so.24.0
00:01:45.789 [517/710] Linking target lib/librte_lpm.so.24.0
00:01:45.789 [518/710] Linking target lib/librte_member.so.24.0
00:01:45.789 [519/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:01:45.789 [520/710] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:01:45.789 [521/710] Linking target lib/librte_ipsec.so.24.0
00:01:45.789 [522/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:01:45.789 [523/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:01:46.051 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:01:46.051 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:01:46.051 [526/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:01:46.311 [527/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:01:46.311 [528/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:01:46.311 [529/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:01:46.311 [530/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:01:46.311 [531/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:01:46.573 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:01:46.573 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:01:46.573 [534/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:01:46.573 [535/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:01:46.838 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:01:46.838 [537/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:01:46.838 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:01:46.838 [539/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:01:46.838 [540/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:01:46.838 [541/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:01:47.099 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:01:47.099 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:01:47.099 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:01:47.361 [545/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:01:47.361 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:01:47.361 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:01:47.361 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:01:47.361 [549/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:01:47.361 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a
00:01:47.361 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:01:47.622 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:01:47.622 [553/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:01:47.622 [554/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:01:47.622 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:01:47.622 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:01:47.888 [557/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:01:47.888 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:01:47.888 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:01:48.466 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:01:48.466 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:01:48.466 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:01:48.727 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:01:48.727 [564/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:01:48.727 [565/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:01:48.727 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.727 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:01:48.727 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:01:48.727 [569/710] Linking target lib/librte_ethdev.so.24.0
00:01:48.727 [570/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:01:48.727 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:01:48.985 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:01:48.985 [573/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:01:48.985 [574/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:01:48.985 [575/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:01:48.985 [576/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:01:48.985 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:01:48.985 [578/710] Linking target lib/librte_metrics.so.24.0
00:01:49.253 [579/710] Linking target lib/librte_gro.so.24.0
00:01:49.253 [580/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:01:49.253 [581/710] Linking target lib/librte_eventdev.so.24.0
00:01:49.253 [582/710] Linking target lib/librte_bpf.so.24.0
00:01:49.253 [583/710] Linking target lib/librte_gso.so.24.0
00:01:49.253 [584/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:01:49.253 [585/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:01:49.253 [586/710] Linking target lib/librte_ip_frag.so.24.0
00:01:49.253 [587/710] Linking target lib/librte_pcapng.so.24.0
00:01:49.253 [588/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:01:49.253 [589/710] Linking target lib/librte_power.so.24.0
00:01:49.511 [590/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:01:49.511 [591/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:01:49.511 [592/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:01:49.511 [593/710] Linking static target lib/librte_pdcp.a
00:01:49.511 [594/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:01:49.511 [595/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:01:49.511 [596/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:01:49.511 [597/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:01:49.511 [598/710] Linking target lib/librte_latencystats.so.24.0
00:01:49.511 [599/710] Linking target lib/librte_bitratestats.so.24.0
00:01:49.512 [600/710] Linking target lib/librte_dispatcher.so.24.0
00:01:49.512 [601/710] Linking target lib/librte_pdump.so.24.0
00:01:49.512 [602/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:01:49.512 [603/710] Linking target lib/librte_graph.so.24.0
00:01:49.512 [604/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:01:49.512 [605/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:01:49.512 [606/710] Linking target lib/librte_port.so.24.0
00:01:49.772 [607/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:01:49.772 [608/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:01:49.772 [609/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:01:49.772 [610/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:01:50.055 [611/710] Linking target lib/librte_table.so.24.0
00:01:50.055 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:01:50.055 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:01:50.055 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:01:50.055 [615/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.055 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:01:50.055 [617/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:01:50.055 [618/710] Linking target lib/librte_pdcp.so.24.0
00:01:50.055 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:01:50.055 [620/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:01:50.055 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:01:50.317 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:01:50.317 [623/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:01:50.317 [624/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:01:50.576 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:01:50.576 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:01:50.576 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:01:50.576 [628/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:01:50.835 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:01:50.835 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:01:51.093 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:01:51.093 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:01:51.093 [633/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:01:51.093 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:01:51.093 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:01:51.350 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:01:51.350 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:01:51.351 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:01:51.351 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:01:51.351 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:01:51.351 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:01:51.351 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:01:51.351 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:01:51.608 [644/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:01:51.608 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:01:51.608 [646/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:01:51.608 [647/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:01:51.608 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:01:51.866 [649/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:01:51.866 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:01:52.124 [651/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:01:52.383 [652/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:01:52.383 [653/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:01:52.641 [654/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:01:52.641 [655/710] Linking static target drivers/libtmp_rte_net_i40e.a
00:01:52.641 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:01:52.899 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:01:53.158 [658/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:01:53.158 [659/710] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:01:53.158 [660/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:01:53.158 [661/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:01:53.158 [662/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:53.158 [663/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:53.158 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:01:53.158 [665/710] Linking static target drivers/librte_net_i40e.a
00:01:53.417 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:01:53.417 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:01:53.675 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:01:53.675 [669/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.675 [670/710] Linking target drivers/librte_net_i40e.so.24.0
00:01:53.933 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:01:54.499 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:01:54.499 [673/710] Linking static target lib/librte_node.a
00:01:54.757 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:01:54.757 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.757 [676/710] Linking target lib/librte_node.so.24.0
00:01:55.324 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:01:55.890 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:01:57.789 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:01:58.046 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:04.626 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:36.682 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:36.682 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:36.682 [684/710] Linking static target lib/librte_vhost.a
00:02:36.682 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.682 [686/710] Linking target lib/librte_vhost.so.24.0
00:02:46.645 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:46.645 [688/710] Linking static target lib/librte_pipeline.a
00:02:46.903 [689/710] Linking target app/dpdk-dumpcap
00:02:47.160 [690/710] Linking target app/dpdk-proc-info
00:02:47.160 [691/710] Linking target app/dpdk-pdump
00:02:47.160 [692/710] Linking target app/dpdk-test-dma-perf
00:02:47.160 [693/710] Linking target app/dpdk-test-regex
00:02:47.160 [694/710] Linking target app/dpdk-test-sad
00:02:47.160 [695/710] Linking target app/dpdk-test-gpudev
00:02:47.160 [696/710] Linking target app/dpdk-test-acl
00:02:47.160 [697/710] Linking target app/dpdk-test-fib
00:02:47.160 [698/710] Linking target app/dpdk-test-cmdline
00:02:47.160 [699/710] Linking target app/dpdk-test-security-perf
00:02:47.160 [700/710] Linking target app/dpdk-test-crypto-perf
00:02:47.160 [701/710] Linking target app/dpdk-test-pipeline
00:02:47.160 [702/710] Linking target app/dpdk-test-flow-perf
00:02:47.160 [703/710] Linking target app/dpdk-test-mldev
00:02:47.160 [704/710] Linking target app/dpdk-test-bbdev
00:02:47.160 [705/710] Linking target app/dpdk-test-compress-perf
00:02:47.160 [706/710] Linking target app/dpdk-graph
00:02:47.160 [707/710] Linking target app/dpdk-test-eventdev
00:02:47.160 [708/710] Linking target app/dpdk-testpmd
00:02:49.057 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.313 [710/710] Linking target lib/librte_pipeline.so.24.0
00:02:49.313 00:48:42 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install
00:02:49.313 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:49.313 [0/1] Installing files.
00:02:49.575 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:49.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:49.577 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:49.580 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:49.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:49.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:49.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:49.581 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:49.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:49.581 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:49.581 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.148 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.148 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.148 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.148 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.148 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.148 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.149 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.150 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.411 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.414 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.414 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:50.414 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:50.414 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:50.414 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:50.414 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:50.414 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:50.414 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:50.414 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:50.414 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:50.414 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:50.414 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:50.414 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:50.414 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:50.414 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:50.414 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:50.414 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:50.414 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:50.414 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:50.414 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:50.414 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:50.414 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:50.414 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:50.414 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:50.414 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:50.414 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:50.414 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:50.414 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:50.414 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:50.414 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:50.414 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:50.414 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:50.414 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:50.414 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:50.414 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:50.414 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:50.414 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:50.414 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:50.414 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:50.414 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:50.414 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:50.414 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:50.414 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:50.414 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:50.414 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:50.414 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:50.414 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:50.414 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:50.414 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:50.414 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:50.414 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:50.414 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:50.414 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:50.414 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:50.414 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:50.414 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:50.414 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:50.414 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:50.414 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:50.414 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:50.414 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:50.414 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:50.414 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:50.414 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:50.414 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:50.414 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:50.414 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:50.414 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:50.414 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:50.415 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:50.415 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:50.415 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:50.415 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:50.415 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:50.415 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:50.415 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:50.415 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:50.415 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:50.415 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:50.415 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:50.415 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:50.415 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:50.415 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:50.415 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:50.415 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:50.415 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:50.415 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:50.415 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:50.415 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:50.415 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:50.415 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:50.415 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:50.415 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:50.415 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:50.415 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:50.415 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:50.415 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:50.415 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:50.415 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:50.415 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:50.415 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:50.415 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:50.415 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:50.415 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:50.415 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:50.415 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:50.415 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:50.415 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:50.415 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:50.415 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:50.415 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:50.415 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:50.415 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:50.415 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:50.415 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:50.415 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:50.415 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:50.415 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:50.415 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:50.415 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:50.415 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:50.415 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:50.415 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:50.415 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:50.415 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:50.415 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:50.415 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:50.415 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:50.415 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:50.415 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:50.415 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:50.415 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:50.415 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:50.415 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:50.415 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:50.415 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:50.415 00:48:43 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:02:50.415 00:48:43 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:50.415 00:48:43 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:02:50.415 00:48:43 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:50.415 00:02:50.415 real 1m26.357s 00:02:50.416 user 17m57.404s 00:02:50.416 sys 2m5.617s 00:02:50.416 00:48:43 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:50.416 00:48:43 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:50.416 ************************************ 00:02:50.416 END TEST build_native_dpdk 00:02:50.416 ************************************ 00:02:50.416 00:48:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:50.416 00:48:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:50.416 00:48:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:50.416 00:48:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:50.416 00:48:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:50.416 00:48:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:50.416 00:48:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:50.416 00:48:43 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:50.416 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:50.673 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.673 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.673 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:50.930 Using 'verbs' RDMA provider 00:03:01.508 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:09.611 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:09.868 Creating mk/config.mk...done. 00:03:09.868 Creating mk/cc.flags.mk...done. 00:03:09.868 Type 'make' to build. 00:03:09.868 00:49:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:09.868 00:49:02 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:09.868 00:49:02 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:09.868 00:49:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:09.868 ************************************ 00:03:09.868 START TEST make 00:03:09.868 ************************************ 00:03:09.868 00:49:02 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:10.125 make[1]: Nothing to be done for 'all'. 
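The install phase above ends by publishing libdpdk-libs.pc and libdpdk.pc into build/lib/pkgconfig, and the configure step that follows resolves DPDK through exactly that directory ("Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs..."). A minimal sketch of that same consumption path, assuming the workspace paths shown in this log; demo.c is a hypothetical consumer, not a file from this run:

    # Point pkg-config at the pkgconfig directory the install step populated:
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # this v23.11 build should report 23.11.0
    # Compile and link a consumer against the installed headers and shared libraries:
    cc demo.c $(pkg-config --cflags --libs libdpdk) -o demo
    # The build was configured with shared libraries, so the loader needs the lib
    # directory on its search path at run time:
    LD_LIBRARY_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib ./demo

Note also that the PMDs listed earlier (bus_pci, bus_vdev, mempool_ring, net_i40e) were relocated under dpdk/pmds-24.0 by symlink-drivers-solibs.sh; that versioned subdirectory is where DPDK's EAL looks for loadable drivers at run time, which is why the custom install script moves them out of build/lib.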
00:03:11.505 The Meson build system 00:03:11.505 Version: 1.3.1 00:03:11.505 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:11.505 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:11.505 Build type: native build 00:03:11.505 Project name: libvfio-user 00:03:11.505 Project version: 0.0.1 00:03:11.505 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:11.505 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:11.505 Host machine cpu family: x86_64 00:03:11.505 Host machine cpu: x86_64 00:03:11.505 Run-time dependency threads found: YES 00:03:11.505 Library dl found: YES 00:03:11.505 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:11.505 Run-time dependency json-c found: YES 0.17 00:03:11.505 Run-time dependency cmocka found: YES 1.1.7 00:03:11.505 Program pytest-3 found: NO 00:03:11.505 Program flake8 found: NO 00:03:11.505 Program misspell-fixer found: NO 00:03:11.505 Program restructuredtext-lint found: NO 00:03:11.505 Program valgrind found: YES (/usr/bin/valgrind) 00:03:11.505 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:11.505 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:11.505 Compiler for C supports arguments -Wwrite-strings: YES 00:03:11.505 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:11.505 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:11.505 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:11.505 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:11.505 Build targets in project: 8 00:03:11.505 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:11.505 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:11.505 00:03:11.505 libvfio-user 0.0.1 00:03:11.505 00:03:11.505 User defined options 00:03:11.505 buildtype : debug 00:03:11.505 default_library: shared 00:03:11.505 libdir : /usr/local/lib 00:03:11.505 00:03:11.505 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:12.458 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:12.458 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:12.458 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:12.723 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:12.723 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:12.723 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:12.723 [6/37] Compiling C object samples/null.p/null.c.o 00:03:12.723 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:12.723 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:12.723 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:12.723 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:12.723 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:12.723 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:12.723 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:12.723 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:12.723 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:12.723 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:12.723 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:12.723 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:12.723 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:12.723 [20/37] Compiling C object samples/server.p/server.c.o 00:03:12.723 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:12.723 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:12.723 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:12.723 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:12.723 [25/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:12.723 [26/37] Compiling C object samples/client.p/client.c.o 00:03:12.983 [27/37] Linking target samples/client 00:03:12.983 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:12.983 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:12.983 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:12.983 [31/37] Linking target test/unit_tests 00:03:13.247 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:13.247 [33/37] Linking target samples/null 00:03:13.247 [34/37] Linking target samples/lspci 00:03:13.247 [35/37] Linking target samples/server 00:03:13.247 [36/37] Linking target samples/gpio-pci-idio-16 00:03:13.247 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:13.247 INFO: autodetecting backend as ninja 00:03:13.247 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:13.247 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:14.194 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.194 ninja: no work to do. 00:03:26.387 CC lib/ut_mock/mock.o 00:03:26.387 CC lib/ut/ut.o 00:03:26.387 CC lib/log/log.o 00:03:26.387 CC lib/log/log_flags.o 00:03:26.387 CC lib/log/log_deprecated.o 00:03:26.387 LIB libspdk_ut.a 00:03:26.387 LIB libspdk_log.a 00:03:26.387 LIB libspdk_ut_mock.a 00:03:26.387 SO libspdk_ut.so.2.0 00:03:26.387 SO libspdk_ut_mock.so.6.0 00:03:26.387 SO libspdk_log.so.7.0 00:03:26.387 SYMLINK libspdk_ut.so 00:03:26.387 SYMLINK libspdk_ut_mock.so 00:03:26.387 SYMLINK libspdk_log.so 00:03:26.387 CXX lib/trace_parser/trace.o 00:03:26.387 CC lib/util/base64.o 00:03:26.387 CC lib/ioat/ioat.o 00:03:26.387 CC lib/util/bit_array.o 00:03:26.387 CC lib/dma/dma.o 00:03:26.387 CC lib/util/cpuset.o 00:03:26.387 CC lib/util/crc16.o 00:03:26.387 CC lib/util/crc32.o 00:03:26.387 CC lib/util/crc32c.o 00:03:26.387 CC lib/util/crc32_ieee.o 00:03:26.387 CC lib/util/crc64.o 00:03:26.387 CC lib/util/dif.o 00:03:26.387 CC lib/util/file.o 00:03:26.387 CC lib/util/fd.o 00:03:26.387 CC lib/util/hexlify.o 00:03:26.387 CC lib/util/iov.o 00:03:26.387 CC lib/util/math.o 00:03:26.387 CC lib/util/pipe.o 00:03:26.387 CC lib/util/strerror_tls.o 00:03:26.387 CC lib/util/string.o 00:03:26.387 CC lib/util/uuid.o 00:03:26.387 CC lib/util/fd_group.o 00:03:26.387 CC lib/util/xor.o 00:03:26.387 CC lib/util/zipf.o 00:03:26.387 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.387 CC lib/vfio_user/host/vfio_user.o 00:03:26.387 LIB libspdk_dma.a 00:03:26.387 SO libspdk_dma.so.4.0 00:03:26.387 SYMLINK libspdk_dma.so 00:03:26.387 LIB libspdk_ioat.a 00:03:26.387 SO libspdk_ioat.so.7.0 00:03:26.388 LIB libspdk_vfio_user.a 00:03:26.388 SYMLINK libspdk_ioat.so 00:03:26.388 SO libspdk_vfio_user.so.5.0 00:03:26.388 SYMLINK libspdk_vfio_user.so 00:03:26.646 LIB libspdk_util.a 00:03:26.646 SO libspdk_util.so.9.0 00:03:26.904 SYMLINK libspdk_util.so 00:03:26.904 CC lib/conf/conf.o 00:03:26.904 CC lib/idxd/idxd.o 00:03:26.904 CC lib/vmd/vmd.o 00:03:26.904 CC lib/json/json_parse.o 00:03:26.904 CC lib/env_dpdk/env.o 00:03:26.904 CC lib/rdma/common.o 00:03:26.904 CC lib/json/json_util.o 00:03:26.904 CC lib/vmd/led.o 00:03:26.904 CC lib/env_dpdk/memory.o 00:03:26.904 CC lib/rdma/rdma_verbs.o 00:03:26.904 CC lib/json/json_write.o 00:03:26.904 CC lib/idxd/idxd_user.o 00:03:26.904 CC lib/idxd/idxd_kernel.o 00:03:26.904 CC lib/env_dpdk/pci.o 00:03:26.904 CC lib/env_dpdk/init.o 00:03:26.904 CC lib/env_dpdk/threads.o 00:03:26.904 CC lib/env_dpdk/pci_ioat.o 00:03:26.904 CC lib/env_dpdk/pci_virtio.o 00:03:26.904 CC lib/env_dpdk/pci_vmd.o 00:03:26.904 CC lib/env_dpdk/pci_idxd.o 00:03:26.904 CC lib/env_dpdk/pci_event.o 00:03:26.904 CC lib/env_dpdk/sigbus_handler.o 00:03:26.904 CC lib/env_dpdk/pci_dpdk.o 00:03:26.904 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:26.904 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.162 LIB libspdk_trace_parser.a 00:03:27.162 SO libspdk_trace_parser.so.5.0 00:03:27.162 SYMLINK libspdk_trace_parser.so 00:03:27.420 LIB libspdk_conf.a 00:03:27.420 LIB libspdk_json.a 00:03:27.420 SO libspdk_conf.so.6.0 00:03:27.420 SO libspdk_json.so.6.0 00:03:27.420 SYMLINK libspdk_conf.so 00:03:27.420 LIB libspdk_rdma.a 00:03:27.420 SYMLINK libspdk_json.so 00:03:27.420 SO libspdk_rdma.so.6.0 00:03:27.420 SYMLINK 
libspdk_rdma.so 00:03:27.677 CC lib/jsonrpc/jsonrpc_server.o 00:03:27.677 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:27.677 CC lib/jsonrpc/jsonrpc_client.o 00:03:27.677 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:27.677 LIB libspdk_idxd.a 00:03:27.677 SO libspdk_idxd.so.12.0 00:03:27.677 SYMLINK libspdk_idxd.so 00:03:27.677 LIB libspdk_vmd.a 00:03:27.677 SO libspdk_vmd.so.6.0 00:03:27.677 SYMLINK libspdk_vmd.so 00:03:27.935 LIB libspdk_jsonrpc.a 00:03:27.935 SO libspdk_jsonrpc.so.6.0 00:03:27.935 SYMLINK libspdk_jsonrpc.so 00:03:28.193 CC lib/rpc/rpc.o 00:03:28.193 LIB libspdk_rpc.a 00:03:28.451 SO libspdk_rpc.so.6.0 00:03:28.451 SYMLINK libspdk_rpc.so 00:03:28.451 CC lib/trace/trace.o 00:03:28.451 CC lib/notify/notify.o 00:03:28.451 CC lib/notify/notify_rpc.o 00:03:28.451 CC lib/trace/trace_flags.o 00:03:28.451 CC lib/trace/trace_rpc.o 00:03:28.451 CC lib/keyring/keyring.o 00:03:28.451 CC lib/keyring/keyring_rpc.o 00:03:28.709 LIB libspdk_notify.a 00:03:28.709 SO libspdk_notify.so.6.0 00:03:28.709 LIB libspdk_keyring.a 00:03:28.709 SYMLINK libspdk_notify.so 00:03:28.709 LIB libspdk_trace.a 00:03:28.709 SO libspdk_keyring.so.1.0 00:03:28.967 SO libspdk_trace.so.10.0 00:03:28.967 SYMLINK libspdk_keyring.so 00:03:28.967 SYMLINK libspdk_trace.so 00:03:28.967 LIB libspdk_env_dpdk.a 00:03:28.967 CC lib/thread/thread.o 00:03:28.967 CC lib/thread/iobuf.o 00:03:28.967 CC lib/sock/sock.o 00:03:28.967 CC lib/sock/sock_rpc.o 00:03:28.967 SO libspdk_env_dpdk.so.14.0 00:03:29.225 SYMLINK libspdk_env_dpdk.so 00:03:29.483 LIB libspdk_sock.a 00:03:29.483 SO libspdk_sock.so.9.0 00:03:29.483 SYMLINK libspdk_sock.so 00:03:29.742 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:29.742 CC lib/nvme/nvme_ctrlr.o 00:03:29.742 CC lib/nvme/nvme_fabric.o 00:03:29.742 CC lib/nvme/nvme_ns_cmd.o 00:03:29.742 CC lib/nvme/nvme_ns.o 00:03:29.742 CC lib/nvme/nvme_pcie_common.o 00:03:29.742 CC lib/nvme/nvme_pcie.o 00:03:29.742 CC lib/nvme/nvme_qpair.o 00:03:29.742 CC lib/nvme/nvme.o 00:03:29.742 CC lib/nvme/nvme_quirks.o 00:03:29.742 CC lib/nvme/nvme_transport.o 00:03:29.742 CC lib/nvme/nvme_discovery.o 00:03:29.742 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:29.742 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:29.742 CC lib/nvme/nvme_tcp.o 00:03:29.742 CC lib/nvme/nvme_opal.o 00:03:29.742 CC lib/nvme/nvme_io_msg.o 00:03:29.742 CC lib/nvme/nvme_poll_group.o 00:03:29.742 CC lib/nvme/nvme_zns.o 00:03:29.742 CC lib/nvme/nvme_stubs.o 00:03:29.742 CC lib/nvme/nvme_auth.o 00:03:29.742 CC lib/nvme/nvme_cuse.o 00:03:29.742 CC lib/nvme/nvme_rdma.o 00:03:29.742 CC lib/nvme/nvme_vfio_user.o 00:03:30.709 LIB libspdk_thread.a 00:03:30.709 SO libspdk_thread.so.10.0 00:03:30.709 SYMLINK libspdk_thread.so 00:03:30.967 CC lib/blob/blobstore.o 00:03:30.967 CC lib/init/json_config.o 00:03:30.967 CC lib/accel/accel.o 00:03:30.967 CC lib/blob/request.o 00:03:30.967 CC lib/virtio/virtio.o 00:03:30.967 CC lib/init/subsystem.o 00:03:30.967 CC lib/vfu_tgt/tgt_endpoint.o 00:03:30.967 CC lib/blob/zeroes.o 00:03:30.967 CC lib/init/subsystem_rpc.o 00:03:30.967 CC lib/accel/accel_rpc.o 00:03:30.967 CC lib/vfu_tgt/tgt_rpc.o 00:03:30.967 CC lib/virtio/virtio_vhost_user.o 00:03:30.967 CC lib/blob/blob_bs_dev.o 00:03:30.967 CC lib/accel/accel_sw.o 00:03:30.967 CC lib/virtio/virtio_vfio_user.o 00:03:30.967 CC lib/init/rpc.o 00:03:30.967 CC lib/virtio/virtio_pci.o 00:03:31.226 LIB libspdk_init.a 00:03:31.226 SO libspdk_init.so.5.0 00:03:31.226 LIB libspdk_virtio.a 00:03:31.226 LIB libspdk_vfu_tgt.a 00:03:31.226 SYMLINK libspdk_init.so 00:03:31.226 SO libspdk_virtio.so.7.0 00:03:31.226 
SO libspdk_vfu_tgt.so.3.0 00:03:31.226 SYMLINK libspdk_vfu_tgt.so 00:03:31.226 SYMLINK libspdk_virtio.so 00:03:31.484 CC lib/event/app.o 00:03:31.484 CC lib/event/reactor.o 00:03:31.484 CC lib/event/log_rpc.o 00:03:31.484 CC lib/event/app_rpc.o 00:03:31.484 CC lib/event/scheduler_static.o 00:03:31.742 LIB libspdk_event.a 00:03:32.000 SO libspdk_event.so.13.0 00:03:32.000 SYMLINK libspdk_event.so 00:03:32.000 LIB libspdk_accel.a 00:03:32.000 SO libspdk_accel.so.15.0 00:03:32.000 SYMLINK libspdk_accel.so 00:03:32.000 LIB libspdk_nvme.a 00:03:32.259 SO libspdk_nvme.so.13.0 00:03:32.259 CC lib/bdev/bdev.o 00:03:32.259 CC lib/bdev/bdev_rpc.o 00:03:32.259 CC lib/bdev/bdev_zone.o 00:03:32.259 CC lib/bdev/part.o 00:03:32.259 CC lib/bdev/scsi_nvme.o 00:03:32.517 SYMLINK libspdk_nvme.so 00:03:33.892 LIB libspdk_blob.a 00:03:33.892 SO libspdk_blob.so.11.0 00:03:33.892 SYMLINK libspdk_blob.so 00:03:34.151 CC lib/blobfs/blobfs.o 00:03:34.151 CC lib/blobfs/tree.o 00:03:34.151 CC lib/lvol/lvol.o 00:03:35.088 LIB libspdk_blobfs.a 00:03:35.088 SO libspdk_blobfs.so.10.0 00:03:35.088 SYMLINK libspdk_blobfs.so 00:03:35.088 LIB libspdk_lvol.a 00:03:35.088 SO libspdk_lvol.so.10.0 00:03:35.088 SYMLINK libspdk_lvol.so 00:03:35.088 LIB libspdk_bdev.a 00:03:35.088 SO libspdk_bdev.so.15.0 00:03:35.357 SYMLINK libspdk_bdev.so 00:03:35.357 CC lib/ublk/ublk.o 00:03:35.357 CC lib/scsi/dev.o 00:03:35.357 CC lib/nvmf/ctrlr.o 00:03:35.357 CC lib/ublk/ublk_rpc.o 00:03:35.357 CC lib/nbd/nbd.o 00:03:35.357 CC lib/scsi/lun.o 00:03:35.357 CC lib/nvmf/ctrlr_discovery.o 00:03:35.357 CC lib/ftl/ftl_core.o 00:03:35.357 CC lib/nbd/nbd_rpc.o 00:03:35.357 CC lib/nvmf/ctrlr_bdev.o 00:03:35.357 CC lib/ftl/ftl_init.o 00:03:35.357 CC lib/scsi/port.o 00:03:35.357 CC lib/nvmf/subsystem.o 00:03:35.357 CC lib/scsi/scsi.o 00:03:35.357 CC lib/ftl/ftl_layout.o 00:03:35.357 CC lib/nvmf/nvmf.o 00:03:35.357 CC lib/scsi/scsi_bdev.o 00:03:35.357 CC lib/ftl/ftl_debug.o 00:03:35.357 CC lib/nvmf/nvmf_rpc.o 00:03:35.357 CC lib/scsi/scsi_pr.o 00:03:35.357 CC lib/ftl/ftl_io.o 00:03:35.357 CC lib/scsi/scsi_rpc.o 00:03:35.357 CC lib/nvmf/tcp.o 00:03:35.357 CC lib/nvmf/transport.o 00:03:35.357 CC lib/scsi/task.o 00:03:35.357 CC lib/ftl/ftl_sb.o 00:03:35.357 CC lib/ftl/ftl_l2p.o 00:03:35.357 CC lib/nvmf/stubs.o 00:03:35.357 CC lib/ftl/ftl_l2p_flat.o 00:03:35.357 CC lib/ftl/ftl_nv_cache.o 00:03:35.357 CC lib/nvmf/mdns_server.o 00:03:35.357 CC lib/nvmf/vfio_user.o 00:03:35.357 CC lib/ftl/ftl_band.o 00:03:35.357 CC lib/nvmf/rdma.o 00:03:35.357 CC lib/ftl/ftl_band_ops.o 00:03:35.357 CC lib/nvmf/auth.o 00:03:35.357 CC lib/ftl/ftl_writer.o 00:03:35.357 CC lib/ftl/ftl_rq.o 00:03:35.357 CC lib/ftl/ftl_reloc.o 00:03:35.357 CC lib/ftl/ftl_l2p_cache.o 00:03:35.357 CC lib/ftl/ftl_p2l.o 00:03:35.357 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.357 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.357 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:35.357 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:35.357 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:35.357 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:35.357 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:35.928 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:35.928 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:35.928 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:35.928 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:35.928 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:35.928 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:35.928 CC lib/ftl/utils/ftl_conf.o 00:03:35.928 CC lib/ftl/utils/ftl_md.o 00:03:35.928 CC lib/ftl/utils/ftl_mempool.o 00:03:35.928 CC lib/ftl/utils/ftl_bitmap.o 00:03:35.928 CC 
lib/ftl/utils/ftl_property.o 00:03:35.928 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:35.928 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:35.928 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:35.928 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:35.928 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:35.928 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:35.928 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:35.928 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:35.928 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:35.928 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:36.188 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:36.188 CC lib/ftl/base/ftl_base_dev.o 00:03:36.188 CC lib/ftl/base/ftl_base_bdev.o 00:03:36.188 CC lib/ftl/ftl_trace.o 00:03:36.188 LIB libspdk_nbd.a 00:03:36.188 SO libspdk_nbd.so.7.0 00:03:36.446 SYMLINK libspdk_nbd.so 00:03:36.446 LIB libspdk_scsi.a 00:03:36.446 SO libspdk_scsi.so.9.0 00:03:36.446 SYMLINK libspdk_scsi.so 00:03:36.446 LIB libspdk_ublk.a 00:03:36.446 SO libspdk_ublk.so.3.0 00:03:36.705 SYMLINK libspdk_ublk.so 00:03:36.705 CC lib/vhost/vhost.o 00:03:36.705 CC lib/iscsi/conn.o 00:03:36.705 CC lib/iscsi/init_grp.o 00:03:36.705 CC lib/vhost/vhost_rpc.o 00:03:36.705 CC lib/vhost/vhost_scsi.o 00:03:36.705 CC lib/iscsi/iscsi.o 00:03:36.705 CC lib/vhost/vhost_blk.o 00:03:36.705 CC lib/iscsi/md5.o 00:03:36.705 CC lib/vhost/rte_vhost_user.o 00:03:36.705 CC lib/iscsi/param.o 00:03:36.705 CC lib/iscsi/portal_grp.o 00:03:36.705 CC lib/iscsi/tgt_node.o 00:03:36.705 CC lib/iscsi/iscsi_subsystem.o 00:03:36.705 CC lib/iscsi/iscsi_rpc.o 00:03:36.705 CC lib/iscsi/task.o 00:03:36.964 LIB libspdk_ftl.a 00:03:36.964 SO libspdk_ftl.so.9.0 00:03:37.531 SYMLINK libspdk_ftl.so 00:03:37.790 LIB libspdk_vhost.a 00:03:37.790 SO libspdk_vhost.so.8.0 00:03:38.048 LIB libspdk_nvmf.a 00:03:38.048 SYMLINK libspdk_vhost.so 00:03:38.048 SO libspdk_nvmf.so.18.0 00:03:38.048 LIB libspdk_iscsi.a 00:03:38.048 SO libspdk_iscsi.so.8.0 00:03:38.307 SYMLINK libspdk_nvmf.so 00:03:38.307 SYMLINK libspdk_iscsi.so 00:03:38.566 CC module/vfu_device/vfu_virtio.o 00:03:38.566 CC module/env_dpdk/env_dpdk_rpc.o 00:03:38.566 CC module/vfu_device/vfu_virtio_blk.o 00:03:38.566 CC module/vfu_device/vfu_virtio_scsi.o 00:03:38.566 CC module/vfu_device/vfu_virtio_rpc.o 00:03:38.566 CC module/accel/error/accel_error.o 00:03:38.566 CC module/accel/dsa/accel_dsa.o 00:03:38.566 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.566 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:38.566 CC module/accel/iaa/accel_iaa.o 00:03:38.566 CC module/keyring/file/keyring.o 00:03:38.566 CC module/keyring/linux/keyring.o 00:03:38.566 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:38.566 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.566 CC module/keyring/linux/keyring_rpc.o 00:03:38.566 CC module/accel/ioat/accel_ioat.o 00:03:38.566 CC module/blob/bdev/blob_bdev.o 00:03:38.566 CC module/accel/error/accel_error_rpc.o 00:03:38.566 CC module/keyring/file/keyring_rpc.o 00:03:38.566 CC module/accel/ioat/accel_ioat_rpc.o 00:03:38.566 CC module/sock/posix/posix.o 00:03:38.566 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.825 LIB libspdk_env_dpdk_rpc.a 00:03:38.825 SO libspdk_env_dpdk_rpc.so.6.0 00:03:38.825 SYMLINK libspdk_env_dpdk_rpc.so 00:03:38.825 LIB libspdk_scheduler_gscheduler.a 00:03:38.825 LIB libspdk_keyring_linux.a 00:03:38.825 LIB libspdk_keyring_file.a 00:03:38.825 LIB libspdk_scheduler_dpdk_governor.a 00:03:38.825 SO libspdk_scheduler_gscheduler.so.4.0 00:03:38.825 SO libspdk_keyring_linux.so.1.0 00:03:38.825 SO libspdk_keyring_file.so.1.0 00:03:38.825 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:03:38.825 LIB libspdk_accel_error.a 00:03:38.825 LIB libspdk_scheduler_dynamic.a 00:03:38.825 LIB libspdk_accel_ioat.a 00:03:38.825 LIB libspdk_accel_iaa.a 00:03:38.825 SO libspdk_accel_error.so.2.0 00:03:38.825 SYMLINK libspdk_scheduler_gscheduler.so 00:03:38.825 SO libspdk_scheduler_dynamic.so.4.0 00:03:38.825 SO libspdk_accel_ioat.so.6.0 00:03:38.825 SYMLINK libspdk_keyring_linux.so 00:03:38.825 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:38.825 SYMLINK libspdk_keyring_file.so 00:03:38.825 SO libspdk_accel_iaa.so.3.0 00:03:39.084 SYMLINK libspdk_accel_error.so 00:03:39.084 LIB libspdk_accel_dsa.a 00:03:39.084 SYMLINK libspdk_scheduler_dynamic.so 00:03:39.084 LIB libspdk_blob_bdev.a 00:03:39.084 SYMLINK libspdk_accel_ioat.so 00:03:39.084 SYMLINK libspdk_accel_iaa.so 00:03:39.084 SO libspdk_accel_dsa.so.5.0 00:03:39.084 SO libspdk_blob_bdev.so.11.0 00:03:39.084 SYMLINK libspdk_blob_bdev.so 00:03:39.084 SYMLINK libspdk_accel_dsa.so 00:03:39.343 LIB libspdk_vfu_device.a 00:03:39.343 CC module/bdev/error/vbdev_error.o 00:03:39.343 CC module/bdev/raid/bdev_raid.o 00:03:39.343 CC module/bdev/delay/vbdev_delay.o 00:03:39.343 CC module/bdev/error/vbdev_error_rpc.o 00:03:39.343 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.343 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:39.343 CC module/bdev/lvol/vbdev_lvol.o 00:03:39.343 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.343 CC module/bdev/null/bdev_null.o 00:03:39.343 CC module/blobfs/bdev/blobfs_bdev.o 00:03:39.343 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:39.343 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:39.343 CC module/bdev/nvme/bdev_nvme.o 00:03:39.343 CC module/bdev/null/bdev_null_rpc.o 00:03:39.343 CC module/bdev/raid/raid0.o 00:03:39.343 CC module/bdev/raid/raid1.o 00:03:39.343 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:39.343 CC module/bdev/nvme/nvme_rpc.o 00:03:39.343 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.343 CC module/bdev/split/vbdev_split.o 00:03:39.343 CC module/bdev/nvme/bdev_mdns_client.o 00:03:39.343 CC module/bdev/gpt/gpt.o 00:03:39.343 CC module/bdev/raid/concat.o 00:03:39.343 CC module/bdev/malloc/bdev_malloc.o 00:03:39.343 CC module/bdev/gpt/vbdev_gpt.o 00:03:39.343 CC module/bdev/nvme/vbdev_opal.o 00:03:39.343 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:39.343 CC module/bdev/passthru/vbdev_passthru.o 00:03:39.343 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:39.343 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.343 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.343 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.343 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.343 CC module/bdev/ftl/bdev_ftl.o 00:03:39.343 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.343 CC module/bdev/aio/bdev_aio.o 00:03:39.343 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.343 CC module/bdev/aio/bdev_aio_rpc.o 00:03:39.343 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.343 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.343 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.343 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.343 SO libspdk_vfu_device.so.3.0 00:03:39.343 SYMLINK libspdk_vfu_device.so 00:03:39.602 LIB libspdk_sock_posix.a 00:03:39.602 SO libspdk_sock_posix.so.6.0 00:03:39.602 LIB libspdk_bdev_gpt.a 00:03:39.602 LIB libspdk_blobfs_bdev.a 00:03:39.860 SO libspdk_blobfs_bdev.so.6.0 00:03:39.860 SO libspdk_bdev_gpt.so.6.0 00:03:39.860 LIB libspdk_bdev_zone_block.a 00:03:39.860 SYMLINK libspdk_sock_posix.so 00:03:39.860 LIB libspdk_bdev_split.a 00:03:39.860 SO 
libspdk_bdev_zone_block.so.6.0 00:03:39.860 LIB libspdk_bdev_null.a 00:03:39.861 SYMLINK libspdk_blobfs_bdev.so 00:03:39.861 SYMLINK libspdk_bdev_gpt.so 00:03:39.861 SO libspdk_bdev_split.so.6.0 00:03:39.861 LIB libspdk_bdev_passthru.a 00:03:39.861 LIB libspdk_bdev_malloc.a 00:03:39.861 SO libspdk_bdev_null.so.6.0 00:03:39.861 LIB libspdk_bdev_error.a 00:03:39.861 SO libspdk_bdev_passthru.so.6.0 00:03:39.861 SYMLINK libspdk_bdev_zone_block.so 00:03:39.861 SO libspdk_bdev_malloc.so.6.0 00:03:39.861 SO libspdk_bdev_error.so.6.0 00:03:39.861 SYMLINK libspdk_bdev_split.so 00:03:39.861 LIB libspdk_bdev_ftl.a 00:03:39.861 LIB libspdk_bdev_delay.a 00:03:39.861 LIB libspdk_bdev_aio.a 00:03:39.861 LIB libspdk_bdev_iscsi.a 00:03:39.861 SYMLINK libspdk_bdev_null.so 00:03:39.861 SYMLINK libspdk_bdev_passthru.so 00:03:39.861 SO libspdk_bdev_ftl.so.6.0 00:03:39.861 SYMLINK libspdk_bdev_malloc.so 00:03:39.861 SO libspdk_bdev_delay.so.6.0 00:03:39.861 SO libspdk_bdev_aio.so.6.0 00:03:39.861 SO libspdk_bdev_iscsi.so.6.0 00:03:39.861 SYMLINK libspdk_bdev_error.so 00:03:39.861 SYMLINK libspdk_bdev_ftl.so 00:03:39.861 SYMLINK libspdk_bdev_delay.so 00:03:39.861 SYMLINK libspdk_bdev_aio.so 00:03:39.861 SYMLINK libspdk_bdev_iscsi.so 00:03:40.119 LIB libspdk_bdev_lvol.a 00:03:40.119 SO libspdk_bdev_lvol.so.6.0 00:03:40.119 LIB libspdk_bdev_virtio.a 00:03:40.119 SYMLINK libspdk_bdev_lvol.so 00:03:40.119 SO libspdk_bdev_virtio.so.6.0 00:03:40.119 SYMLINK libspdk_bdev_virtio.so 00:03:40.377 LIB libspdk_bdev_raid.a 00:03:40.636 SO libspdk_bdev_raid.so.6.0 00:03:40.636 SYMLINK libspdk_bdev_raid.so 00:03:41.591 LIB libspdk_bdev_nvme.a 00:03:41.591 SO libspdk_bdev_nvme.so.7.0 00:03:41.849 SYMLINK libspdk_bdev_nvme.so 00:03:42.107 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:42.107 CC module/event/subsystems/keyring/keyring.o 00:03:42.107 CC module/event/subsystems/vmd/vmd.o 00:03:42.107 CC module/event/subsystems/scheduler/scheduler.o 00:03:42.107 CC module/event/subsystems/iobuf/iobuf.o 00:03:42.107 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:42.107 CC module/event/subsystems/sock/sock.o 00:03:42.107 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:42.107 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:42.365 LIB libspdk_event_keyring.a 00:03:42.365 LIB libspdk_event_vhost_blk.a 00:03:42.365 LIB libspdk_event_sock.a 00:03:42.365 LIB libspdk_event_scheduler.a 00:03:42.365 LIB libspdk_event_vfu_tgt.a 00:03:42.365 LIB libspdk_event_vmd.a 00:03:42.365 LIB libspdk_event_iobuf.a 00:03:42.365 SO libspdk_event_keyring.so.1.0 00:03:42.365 SO libspdk_event_sock.so.5.0 00:03:42.365 SO libspdk_event_vhost_blk.so.3.0 00:03:42.365 SO libspdk_event_scheduler.so.4.0 00:03:42.365 SO libspdk_event_vfu_tgt.so.3.0 00:03:42.365 SO libspdk_event_vmd.so.6.0 00:03:42.365 SO libspdk_event_iobuf.so.3.0 00:03:42.365 SYMLINK libspdk_event_keyring.so 00:03:42.365 SYMLINK libspdk_event_sock.so 00:03:42.365 SYMLINK libspdk_event_vhost_blk.so 00:03:42.365 SYMLINK libspdk_event_vfu_tgt.so 00:03:42.365 SYMLINK libspdk_event_scheduler.so 00:03:42.365 SYMLINK libspdk_event_vmd.so 00:03:42.365 SYMLINK libspdk_event_iobuf.so 00:03:42.623 CC module/event/subsystems/accel/accel.o 00:03:42.623 LIB libspdk_event_accel.a 00:03:42.881 SO libspdk_event_accel.so.6.0 00:03:42.881 SYMLINK libspdk_event_accel.so 00:03:42.881 CC module/event/subsystems/bdev/bdev.o 00:03:43.139 LIB libspdk_event_bdev.a 00:03:43.139 SO libspdk_event_bdev.so.6.0 00:03:43.139 SYMLINK libspdk_event_bdev.so 00:03:43.398 CC module/event/subsystems/nbd/nbd.o 00:03:43.398 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:03:43.398 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:43.398 CC module/event/subsystems/ublk/ublk.o 00:03:43.398 CC module/event/subsystems/scsi/scsi.o 00:03:43.656 LIB libspdk_event_ublk.a 00:03:43.656 LIB libspdk_event_nbd.a 00:03:43.656 SO libspdk_event_ublk.so.3.0 00:03:43.656 LIB libspdk_event_scsi.a 00:03:43.656 SO libspdk_event_nbd.so.6.0 00:03:43.656 SO libspdk_event_scsi.so.6.0 00:03:43.656 SYMLINK libspdk_event_ublk.so 00:03:43.656 SYMLINK libspdk_event_nbd.so 00:03:43.656 SYMLINK libspdk_event_scsi.so 00:03:43.656 LIB libspdk_event_nvmf.a 00:03:43.656 SO libspdk_event_nvmf.so.6.0 00:03:43.656 SYMLINK libspdk_event_nvmf.so 00:03:43.917 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.917 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.917 LIB libspdk_event_vhost_scsi.a 00:03:43.917 LIB libspdk_event_iscsi.a 00:03:43.917 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.917 SO libspdk_event_iscsi.so.6.0 00:03:43.917 SYMLINK libspdk_event_vhost_scsi.so 00:03:44.177 SYMLINK libspdk_event_iscsi.so 00:03:44.177 SO libspdk.so.6.0 00:03:44.177 SYMLINK libspdk.so 00:03:44.443 TEST_HEADER include/spdk/accel.h 00:03:44.443 TEST_HEADER include/spdk/accel_module.h 00:03:44.443 TEST_HEADER include/spdk/assert.h 00:03:44.443 TEST_HEADER include/spdk/barrier.h 00:03:44.443 TEST_HEADER include/spdk/base64.h 00:03:44.443 TEST_HEADER include/spdk/bdev.h 00:03:44.443 TEST_HEADER include/spdk/bdev_module.h 00:03:44.443 CXX app/trace/trace.o 00:03:44.443 TEST_HEADER include/spdk/bdev_zone.h 00:03:44.443 TEST_HEADER include/spdk/bit_array.h 00:03:44.443 CC app/spdk_top/spdk_top.o 00:03:44.443 CC test/rpc_client/rpc_client_test.o 00:03:44.443 CC app/trace_record/trace_record.o 00:03:44.443 CC app/spdk_nvme_identify/identify.o 00:03:44.443 TEST_HEADER include/spdk/bit_pool.h 00:03:44.443 CC app/spdk_nvme_perf/perf.o 00:03:44.443 CC app/spdk_lspci/spdk_lspci.o 00:03:44.443 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.443 TEST_HEADER include/spdk/blob_bdev.h 00:03:44.443 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:44.443 TEST_HEADER include/spdk/blobfs.h 00:03:44.443 TEST_HEADER include/spdk/blob.h 00:03:44.443 TEST_HEADER include/spdk/conf.h 00:03:44.443 TEST_HEADER include/spdk/config.h 00:03:44.443 TEST_HEADER include/spdk/cpuset.h 00:03:44.443 TEST_HEADER include/spdk/crc16.h 00:03:44.443 TEST_HEADER include/spdk/crc32.h 00:03:44.443 TEST_HEADER include/spdk/crc64.h 00:03:44.443 TEST_HEADER include/spdk/dif.h 00:03:44.443 TEST_HEADER include/spdk/dma.h 00:03:44.443 TEST_HEADER include/spdk/endian.h 00:03:44.443 TEST_HEADER include/spdk/env_dpdk.h 00:03:44.443 TEST_HEADER include/spdk/env.h 00:03:44.443 TEST_HEADER include/spdk/event.h 00:03:44.443 TEST_HEADER include/spdk/fd_group.h 00:03:44.443 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.443 TEST_HEADER include/spdk/fd.h 00:03:44.443 CC app/spdk_dd/spdk_dd.o 00:03:44.443 TEST_HEADER include/spdk/file.h 00:03:44.443 TEST_HEADER include/spdk/ftl.h 00:03:44.443 TEST_HEADER include/spdk/gpt_spec.h 00:03:44.443 CC app/iscsi_tgt/iscsi_tgt.o 00:03:44.443 TEST_HEADER include/spdk/hexlify.h 00:03:44.443 TEST_HEADER include/spdk/histogram_data.h 00:03:44.443 CC app/vhost/vhost.o 00:03:44.443 TEST_HEADER include/spdk/idxd.h 00:03:44.443 CC app/nvmf_tgt/nvmf_main.o 00:03:44.443 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.443 TEST_HEADER include/spdk/init.h 00:03:44.443 TEST_HEADER include/spdk/ioat.h 00:03:44.443 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.443 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:44.443 TEST_HEADER include/spdk/json.h 00:03:44.443 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.443 TEST_HEADER include/spdk/keyring.h 00:03:44.443 TEST_HEADER include/spdk/keyring_module.h 00:03:44.443 TEST_HEADER include/spdk/likely.h 00:03:44.443 CC examples/util/zipf/zipf.o 00:03:44.443 CC test/env/memory/memory_ut.o 00:03:44.443 CC test/env/vtophys/vtophys.o 00:03:44.443 CC app/spdk_tgt/spdk_tgt.o 00:03:44.443 TEST_HEADER include/spdk/log.h 00:03:44.443 CC examples/nvme/hello_world/hello_world.o 00:03:44.443 CC examples/sock/hello_world/hello_sock.o 00:03:44.443 TEST_HEADER include/spdk/lvol.h 00:03:44.443 CC test/env/pci/pci_ut.o 00:03:44.443 CC examples/ioat/perf/perf.o 00:03:44.443 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:44.443 CC app/fio/nvme/fio_plugin.o 00:03:44.443 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:44.443 TEST_HEADER include/spdk/memory.h 00:03:44.443 CC examples/accel/perf/accel_perf.o 00:03:44.443 CC test/app/histogram_perf/histogram_perf.o 00:03:44.443 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.443 CC examples/nvme/arbitration/arbitration.o 00:03:44.443 CC examples/nvme/reconnect/reconnect.o 00:03:44.443 TEST_HEADER include/spdk/mmio.h 00:03:44.443 CC test/thread/poller_perf/poller_perf.o 00:03:44.443 TEST_HEADER include/spdk/nbd.h 00:03:44.443 TEST_HEADER include/spdk/notify.h 00:03:44.443 CC test/event/event_perf/event_perf.o 00:03:44.443 TEST_HEADER include/spdk/nvme.h 00:03:44.443 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.443 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.443 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.443 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.443 CC examples/idxd/perf/perf.o 00:03:44.443 CC test/nvme/aer/aer.o 00:03:44.443 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.704 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.704 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.704 TEST_HEADER include/spdk/nvmf.h 00:03:44.704 TEST_HEADER include/spdk/nvmf_spec.h 00:03:44.704 TEST_HEADER include/spdk/nvmf_transport.h 00:03:44.704 TEST_HEADER include/spdk/opal.h 00:03:44.704 TEST_HEADER include/spdk/opal_spec.h 00:03:44.704 TEST_HEADER include/spdk/pci_ids.h 00:03:44.704 TEST_HEADER include/spdk/pipe.h 00:03:44.704 TEST_HEADER include/spdk/queue.h 00:03:44.704 TEST_HEADER include/spdk/reduce.h 00:03:44.704 CC test/bdev/bdevio/bdevio.o 00:03:44.704 TEST_HEADER include/spdk/rpc.h 00:03:44.704 CC test/accel/dif/dif.o 00:03:44.704 TEST_HEADER include/spdk/scheduler.h 00:03:44.704 CC examples/bdev/bdevperf/bdevperf.o 00:03:44.704 TEST_HEADER include/spdk/scsi.h 00:03:44.704 CC test/dma/test_dma/test_dma.o 00:03:44.704 TEST_HEADER include/spdk/scsi_spec.h 00:03:44.704 CC examples/blob/hello_world/hello_blob.o 00:03:44.704 CC test/blobfs/mkfs/mkfs.o 00:03:44.704 CC examples/thread/thread/thread_ex.o 00:03:44.704 TEST_HEADER include/spdk/sock.h 00:03:44.704 CC examples/blob/cli/blobcli.o 00:03:44.704 TEST_HEADER include/spdk/stdinc.h 00:03:44.704 CC examples/nvmf/nvmf/nvmf.o 00:03:44.704 TEST_HEADER include/spdk/string.h 00:03:44.704 TEST_HEADER include/spdk/thread.h 00:03:44.704 CC test/app/bdev_svc/bdev_svc.o 00:03:44.704 CC examples/bdev/hello_world/hello_bdev.o 00:03:44.704 TEST_HEADER include/spdk/trace.h 00:03:44.704 TEST_HEADER include/spdk/trace_parser.h 00:03:44.704 TEST_HEADER include/spdk/tree.h 00:03:44.704 TEST_HEADER include/spdk/ublk.h 00:03:44.704 TEST_HEADER include/spdk/util.h 00:03:44.704 TEST_HEADER include/spdk/uuid.h 00:03:44.704 TEST_HEADER include/spdk/version.h 
00:03:44.704 CC test/env/mem_callbacks/mem_callbacks.o 00:03:44.704 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:44.704 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:44.704 TEST_HEADER include/spdk/vhost.h 00:03:44.704 TEST_HEADER include/spdk/vmd.h 00:03:44.704 TEST_HEADER include/spdk/xor.h 00:03:44.704 TEST_HEADER include/spdk/zipf.h 00:03:44.704 CXX test/cpp_headers/accel.o 00:03:44.704 LINK spdk_lspci 00:03:44.704 CC test/lvol/esnap/esnap.o 00:03:44.704 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.704 LINK rpc_client_test 00:03:44.969 LINK spdk_nvme_discover 00:03:44.969 LINK vtophys 00:03:44.969 LINK lsvmd 00:03:44.969 LINK zipf 00:03:44.969 LINK interrupt_tgt 00:03:44.969 LINK histogram_perf 00:03:44.969 LINK poller_perf 00:03:44.969 LINK nvmf_tgt 00:03:44.969 LINK event_perf 00:03:44.969 LINK env_dpdk_post_init 00:03:44.969 LINK vhost 00:03:44.969 LINK iscsi_tgt 00:03:44.969 LINK spdk_trace_record 00:03:44.969 LINK spdk_tgt 00:03:44.969 LINK ioat_perf 00:03:44.969 LINK hello_world 00:03:44.969 LINK hello_sock 00:03:44.969 LINK bdev_svc 00:03:44.969 LINK mkfs 00:03:44.969 LINK hello_blob 00:03:45.232 CXX test/cpp_headers/accel_module.o 00:03:45.232 LINK aer 00:03:45.232 LINK hello_bdev 00:03:45.232 LINK thread 00:03:45.232 LINK spdk_dd 00:03:45.232 CC examples/vmd/led/led.o 00:03:45.232 LINK arbitration 00:03:45.232 LINK nvmf 00:03:45.232 LINK reconnect 00:03:45.232 LINK idxd_perf 00:03:45.232 CC test/event/reactor/reactor.o 00:03:45.232 LINK pci_ut 00:03:45.232 LINK spdk_trace 00:03:45.232 CC examples/ioat/verify/verify.o 00:03:45.232 CC test/app/jsoncat/jsoncat.o 00:03:45.232 CC examples/nvme/hotplug/hotplug.o 00:03:45.232 LINK bdevio 00:03:45.232 CC test/app/stub/stub.o 00:03:45.232 CXX test/cpp_headers/assert.o 00:03:45.232 CXX test/cpp_headers/barrier.o 00:03:45.232 CC app/fio/bdev/fio_plugin.o 00:03:45.500 CC test/nvme/reset/reset.o 00:03:45.500 LINK test_dma 00:03:45.500 CC test/event/reactor_perf/reactor_perf.o 00:03:45.500 CC test/nvme/sgl/sgl.o 00:03:45.500 CC test/event/app_repeat/app_repeat.o 00:03:45.500 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.500 LINK dif 00:03:45.500 CXX test/cpp_headers/base64.o 00:03:45.500 CXX test/cpp_headers/bdev.o 00:03:45.500 LINK nvme_manage 00:03:45.500 LINK led 00:03:45.500 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:45.500 LINK accel_perf 00:03:45.500 CC test/event/scheduler/scheduler.o 00:03:45.500 CC test/nvme/overhead/overhead.o 00:03:45.500 LINK nvme_fuzz 00:03:45.500 CC test/nvme/e2edp/nvme_dp.o 00:03:45.500 LINK reactor 00:03:45.500 CXX test/cpp_headers/bdev_module.o 00:03:45.500 LINK spdk_nvme 00:03:45.500 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:45.500 CC examples/nvme/abort/abort.o 00:03:45.500 LINK blobcli 00:03:45.765 LINK jsoncat 00:03:45.765 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:45.765 CC test/nvme/err_injection/err_injection.o 00:03:45.765 LINK reactor_perf 00:03:45.765 CC test/nvme/startup/startup.o 00:03:45.765 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:45.765 CC test/nvme/simple_copy/simple_copy.o 00:03:45.765 CC test/nvme/reserve/reserve.o 00:03:45.765 CXX test/cpp_headers/bdev_zone.o 00:03:45.765 LINK stub 00:03:45.765 CC test/nvme/connect_stress/connect_stress.o 00:03:45.765 LINK verify 00:03:45.765 CC test/nvme/boot_partition/boot_partition.o 00:03:45.765 LINK app_repeat 00:03:45.765 CC test/nvme/compliance/nvme_compliance.o 00:03:45.765 CC test/nvme/fused_ordering/fused_ordering.o 00:03:45.765 CXX test/cpp_headers/bit_array.o 00:03:45.765 LINK hotplug 00:03:45.765 CXX 
test/cpp_headers/bit_pool.o 00:03:45.765 CXX test/cpp_headers/blob_bdev.o 00:03:45.765 CXX test/cpp_headers/blobfs_bdev.o 00:03:45.765 LINK cmb_copy 00:03:45.765 CXX test/cpp_headers/blobfs.o 00:03:45.765 CXX test/cpp_headers/blob.o 00:03:45.765 CXX test/cpp_headers/conf.o 00:03:45.765 CXX test/cpp_headers/config.o 00:03:45.765 CXX test/cpp_headers/cpuset.o 00:03:46.029 CXX test/cpp_headers/crc16.o 00:03:46.029 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:46.029 LINK mem_callbacks 00:03:46.029 LINK reset 00:03:46.029 CXX test/cpp_headers/crc32.o 00:03:46.029 CC test/nvme/fdp/fdp.o 00:03:46.029 LINK spdk_nvme_perf 00:03:46.029 LINK sgl 00:03:46.029 CXX test/cpp_headers/crc64.o 00:03:46.029 CC test/nvme/cuse/cuse.o 00:03:46.029 LINK scheduler 00:03:46.029 LINK pmr_persistence 00:03:46.029 CXX test/cpp_headers/dif.o 00:03:46.029 CXX test/cpp_headers/dma.o 00:03:46.029 CXX test/cpp_headers/endian.o 00:03:46.029 LINK spdk_nvme_identify 00:03:46.029 CXX test/cpp_headers/env_dpdk.o 00:03:46.029 LINK err_injection 00:03:46.029 CXX test/cpp_headers/env.o 00:03:46.029 LINK startup 00:03:46.029 LINK overhead 00:03:46.029 LINK bdevperf 00:03:46.029 LINK nvme_dp 00:03:46.029 LINK boot_partition 00:03:46.029 CXX test/cpp_headers/event.o 00:03:46.029 LINK spdk_top 00:03:46.029 LINK connect_stress 00:03:46.029 CXX test/cpp_headers/fd_group.o 00:03:46.029 CXX test/cpp_headers/fd.o 00:03:46.325 LINK reserve 00:03:46.325 CXX test/cpp_headers/file.o 00:03:46.325 CXX test/cpp_headers/ftl.o 00:03:46.325 CXX test/cpp_headers/gpt_spec.o 00:03:46.325 CXX test/cpp_headers/hexlify.o 00:03:46.325 LINK simple_copy 00:03:46.325 CXX test/cpp_headers/histogram_data.o 00:03:46.325 CXX test/cpp_headers/idxd.o 00:03:46.325 LINK fused_ordering 00:03:46.325 CXX test/cpp_headers/idxd_spec.o 00:03:46.325 CXX test/cpp_headers/init.o 00:03:46.325 CXX test/cpp_headers/ioat.o 00:03:46.325 CXX test/cpp_headers/ioat_spec.o 00:03:46.325 CXX test/cpp_headers/iscsi_spec.o 00:03:46.325 CXX test/cpp_headers/json.o 00:03:46.325 CXX test/cpp_headers/jsonrpc.o 00:03:46.325 CXX test/cpp_headers/keyring.o 00:03:46.325 CXX test/cpp_headers/keyring_module.o 00:03:46.325 LINK doorbell_aers 00:03:46.325 CXX test/cpp_headers/likely.o 00:03:46.325 LINK abort 00:03:46.325 LINK spdk_bdev 00:03:46.325 CXX test/cpp_headers/log.o 00:03:46.325 CXX test/cpp_headers/lvol.o 00:03:46.325 CXX test/cpp_headers/memory.o 00:03:46.325 CXX test/cpp_headers/mmio.o 00:03:46.325 CXX test/cpp_headers/nbd.o 00:03:46.325 CXX test/cpp_headers/notify.o 00:03:46.325 LINK nvme_compliance 00:03:46.325 CXX test/cpp_headers/nvme.o 00:03:46.325 CXX test/cpp_headers/nvme_intel.o 00:03:46.325 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.325 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.325 CXX test/cpp_headers/nvme_spec.o 00:03:46.325 CXX test/cpp_headers/nvme_zns.o 00:03:46.596 CXX test/cpp_headers/nvmf_cmd.o 00:03:46.596 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:46.596 CXX test/cpp_headers/nvmf.o 00:03:46.597 CXX test/cpp_headers/nvmf_spec.o 00:03:46.597 LINK vhost_fuzz 00:03:46.597 CXX test/cpp_headers/nvmf_transport.o 00:03:46.597 CXX test/cpp_headers/opal.o 00:03:46.597 CXX test/cpp_headers/opal_spec.o 00:03:46.597 CXX test/cpp_headers/pci_ids.o 00:03:46.597 CXX test/cpp_headers/pipe.o 00:03:46.597 CXX test/cpp_headers/queue.o 00:03:46.597 CXX test/cpp_headers/reduce.o 00:03:46.597 CXX test/cpp_headers/rpc.o 00:03:46.597 CXX test/cpp_headers/scheduler.o 00:03:46.597 CXX test/cpp_headers/scsi.o 00:03:46.597 CXX test/cpp_headers/scsi_spec.o 00:03:46.597 CXX 
test/cpp_headers/sock.o 00:03:46.597 LINK fdp 00:03:46.597 CXX test/cpp_headers/stdinc.o 00:03:46.597 CXX test/cpp_headers/string.o 00:03:46.597 CXX test/cpp_headers/thread.o 00:03:46.597 CXX test/cpp_headers/trace.o 00:03:46.597 CXX test/cpp_headers/trace_parser.o 00:03:46.597 CXX test/cpp_headers/tree.o 00:03:46.597 CXX test/cpp_headers/ublk.o 00:03:46.597 CXX test/cpp_headers/util.o 00:03:46.597 CXX test/cpp_headers/uuid.o 00:03:46.597 CXX test/cpp_headers/version.o 00:03:46.597 CXX test/cpp_headers/vfio_user_pci.o 00:03:46.597 CXX test/cpp_headers/vfio_user_spec.o 00:03:46.597 CXX test/cpp_headers/vhost.o 00:03:46.597 CXX test/cpp_headers/vmd.o 00:03:46.597 LINK memory_ut 00:03:46.597 CXX test/cpp_headers/xor.o 00:03:46.597 CXX test/cpp_headers/zipf.o 00:03:47.972 LINK cuse 00:03:47.972 LINK iscsi_fuzz 00:03:50.500 LINK esnap 00:03:51.065 00:03:51.066 real 0m41.142s 00:03:51.066 user 7m36.844s 00:03:51.066 sys 1m49.900s 00:03:51.066 00:49:44 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:51.066 00:49:44 make -- common/autotest_common.sh@10 -- $ set +x 00:03:51.066 ************************************ 00:03:51.066 END TEST make 00:03:51.066 ************************************ 00:03:51.066 00:49:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:51.066 00:49:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:51.066 00:49:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:51.066 00:49:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.066 00:49:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:51.066 00:49:44 -- pm/common@44 -- $ pid=3530336 00:03:51.066 00:49:44 -- pm/common@50 -- $ kill -TERM 3530336 00:03:51.066 00:49:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.066 00:49:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:51.066 00:49:44 -- pm/common@44 -- $ pid=3530338 00:03:51.066 00:49:44 -- pm/common@50 -- $ kill -TERM 3530338 00:03:51.066 00:49:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.066 00:49:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:51.066 00:49:44 -- pm/common@44 -- $ pid=3530340 00:03:51.066 00:49:44 -- pm/common@50 -- $ kill -TERM 3530340 00:03:51.066 00:49:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.066 00:49:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:51.066 00:49:44 -- pm/common@44 -- $ pid=3530368 00:03:51.066 00:49:44 -- pm/common@50 -- $ sudo -E kill -TERM 3530368 00:03:51.066 00:49:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:51.066 00:49:44 -- nvmf/common.sh@7 -- # uname -s 00:03:51.066 00:49:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.066 00:49:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.066 00:49:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.066 00:49:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.066 00:49:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.066 00:49:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.066 00:49:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.066 00:49:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:03:51.066 00:49:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.066 00:49:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.066 00:49:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:51.066 00:49:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:51.066 00:49:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.066 00:49:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.066 00:49:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:51.066 00:49:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:51.066 00:49:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:51.066 00:49:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.066 00:49:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.066 00:49:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.066 00:49:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.066 00:49:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.066 00:49:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.066 00:49:44 -- paths/export.sh@5 -- # export PATH 00:03:51.066 00:49:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.066 00:49:44 -- nvmf/common.sh@47 -- # : 0 00:03:51.066 00:49:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:51.066 00:49:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:51.066 00:49:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:51.066 00:49:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.066 00:49:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.066 00:49:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:51.066 00:49:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:51.066 00:49:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:51.066 00:49:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.066 00:49:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.066 00:49:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.066 00:49:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:51.066 00:49:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.066 00:49:44 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.066 00:49:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.066 00:49:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:51.066 00:49:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:51.066 00:49:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:51.066 00:49:44 -- spdk/autotest.sh@48 -- # udevadm_pid=3606727 00:03:51.066 00:49:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:51.066 00:49:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:51.066 00:49:44 -- pm/common@17 -- # local monitor 00:03:51.066 00:49:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.066 00:49:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.066 00:49:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.066 00:49:44 -- pm/common@21 -- # date +%s 00:03:51.066 00:49:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.066 00:49:44 -- pm/common@21 -- # date +%s 00:03:51.066 00:49:44 -- pm/common@25 -- # sleep 1 00:03:51.066 00:49:44 -- pm/common@21 -- # date +%s 00:03:51.066 00:49:44 -- pm/common@21 -- # date +%s 00:03:51.066 00:49:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721861384 00:03:51.066 00:49:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721861384 00:03:51.066 00:49:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721861384 00:03:51.066 00:49:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721861384 00:03:51.066 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721861384_collect-vmstat.pm.log 00:03:51.066 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721861384_collect-cpu-load.pm.log 00:03:51.066 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721861384_collect-cpu-temp.pm.log 00:03:51.066 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721861384_collect-bmc-pm.bmc.pm.log 00:03:51.999 00:49:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:51.999 00:49:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:51.999 00:49:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:51.999 00:49:45 -- common/autotest_common.sh@10 -- # set +x 00:03:51.999 00:49:45 -- spdk/autotest.sh@59 -- # create_test_list 00:03:51.999 00:49:45 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:51.999 00:49:45 -- common/autotest_common.sh@10 -- # set +x 00:03:52.257 00:49:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:52.257 00:49:45 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.257 00:49:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.257 00:49:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:52.257 00:49:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.257 00:49:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.257 00:49:45 -- common/autotest_common.sh@1451 -- # uname 00:03:52.257 00:49:45 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:52.257 00:49:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.257 00:49:45 -- common/autotest_common.sh@1471 -- # uname 00:03:52.257 00:49:45 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:52.257 00:49:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:52.257 00:49:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:52.257 00:49:45 -- spdk/autotest.sh@72 -- # hash lcov 00:03:52.257 00:49:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:52.257 00:49:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:52.257 --rc lcov_branch_coverage=1 00:03:52.257 --rc lcov_function_coverage=1 00:03:52.257 --rc genhtml_branch_coverage=1 00:03:52.257 --rc genhtml_function_coverage=1 00:03:52.257 --rc genhtml_legend=1 00:03:52.257 --rc geninfo_all_blocks=1 00:03:52.257 ' 00:03:52.257 00:49:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:52.257 --rc lcov_branch_coverage=1 00:03:52.257 --rc lcov_function_coverage=1 00:03:52.257 --rc genhtml_branch_coverage=1 00:03:52.257 --rc genhtml_function_coverage=1 00:03:52.257 --rc genhtml_legend=1 00:03:52.257 --rc geninfo_all_blocks=1 00:03:52.257 ' 00:03:52.257 00:49:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:52.257 --rc lcov_branch_coverage=1 00:03:52.257 --rc lcov_function_coverage=1 00:03:52.257 --rc genhtml_branch_coverage=1 00:03:52.257 --rc genhtml_function_coverage=1 00:03:52.257 --rc genhtml_legend=1 00:03:52.257 --rc geninfo_all_blocks=1 00:03:52.257 --no-external' 00:03:52.257 00:49:45 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:52.257 --rc lcov_branch_coverage=1 00:03:52.257 --rc lcov_function_coverage=1 00:03:52.257 --rc genhtml_branch_coverage=1 00:03:52.257 --rc genhtml_function_coverage=1 00:03:52.257 --rc genhtml_legend=1 00:03:52.257 --rc geninfo_all_blocks=1 00:03:52.257 --no-external' 00:03:52.257 00:49:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:52.257 lcov: LCOV version 1.14 00:03:52.257 00:49:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:07.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:07.124 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:21.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
[the identical warning pair, "<header>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <header>.gcno", repeats at 00:04:21.983-00:04:21.984 for every header-compilation stub under test/cpp_headers, from config.gcno through vfio_user_pci.gcno]
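These "no functions found" warnings are expected for the cpp_headers stubs: each object only proves that a public header compiles on its own, so its .gcno carries no executable functions for geninfo to report. As a hedged aside (geninfo belongs to the lcov package; the build directory path here is hypothetical, not taken from this job), this kind of gcov data is typically aggregated and rendered like so; the run finishes just below with the last five headers.

    # capture .gcno/.gcda data under build/ and render an HTML report
    lcov --capture --directory build/ --output-file coverage.info
    genhtml coverage.info --output-directory coverage_html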
[the same warning pair closes out the run for vfio_user_spec.gcno, vhost.gcno, vmd.gcno, xor.gcno and zipf.gcno]
00:04:25.265 00:50:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:25.265 00:50:18 -- common/autotest_common.sh@720 -- # xtrace_disable
00:04:25.265 00:50:18 -- common/autotest_common.sh@10 -- # set +x
00:04:25.265 00:50:18 -- spdk/autotest.sh@91 -- # rm -f
00:04:25.265 00:50:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:26.638 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:04:26.638 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:26.638 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:26.638 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:26.638 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:26.638 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:26.638 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:26.638 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:26.638 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:26.638 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:26.638 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:26.638 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:26.638 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:26.638 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:26.638 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:26.638 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:26.638 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:26.638 00:50:19 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:50:19 -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:50:19 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:50:19 -- common/autotest_common.sh@1666 -- # local nvme bdf
00:50:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:50:19 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:50:19 -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:50:19 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
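get_zoned_devs walks /sys/block and records any zoned NVMe namespace so later steps can avoid it; on this machine nvme0n1 reports "none", as the next record shows. A minimal standalone sketch of the same sysfs check (the device glob is illustrative):

    for sysdev in /sys/block/nvme*; do
        [[ -e "$sysdev/queue/zoned" ]] || continue      # older kernels lack the attribute
        mode=$(cat "$sysdev/queue/zoned")               # none, host-aware, or host-managed
        [[ "$mode" != none ]] && echo "${sysdev##*/}: $mode"
    done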
00:50:19 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:50:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:50:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:50:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:50:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:50:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:50:19 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:26.896 No valid GPT data, bailing
00:04:26.896 00:50:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:50:19 -- scripts/common.sh@391 -- # pt=
00:50:19 -- scripts/common.sh@392 -- # return 1
00:50:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:26.896 1+0 records in
00:04:26.896 1+0 records out
00:04:26.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00239319 s, 438 MB/s
00:04:26.896 00:50:19 -- spdk/autotest.sh@118 -- # sync
00:50:19 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:50:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:50:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes
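block_in_use decides whether the namespace may be scrubbed: spdk-gpt.py found no GPT and blkid printed no PTTYPE, so return 1 ("not in use") let autotest.sh zero the first MiB. A hedged standalone equivalent of that probe-then-wipe step (destructive; the device name is illustrative):

    dev=/dev/nvme0n1
    if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n "$pt" ]]; then
        echo "$dev holds a $pt partition table; leaving it alone"
    else
        dd if=/dev/zero of="$dev" bs=1M count=1 && sync   # same wipe as autotest.sh@114-118
    fi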
00:04:28.828 00:50:21 -- spdk/autotest.sh@124 -- # uname -s
00:50:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:50:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:50:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:50:21 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:50:21 -- common/autotest_common.sh@10 -- # set +x
00:04:28.828 ************************************
00:04:28.828 START TEST setup.sh
00:04:28.828 ************************************
00:50:21 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:28.829 * Looking for test storage...
00:04:28.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:50:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:50:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:50:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:50:21 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:50:21 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:50:21 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:28.829 ************************************
00:04:28.829 START TEST acl
00:04:28.829 ************************************
00:50:21 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:28.829 * Looking for test storage...
00:04:28.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:50:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:50:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:50:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:50:21 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
00:50:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:50:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:50:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:50:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:50:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:50:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:50:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:50:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:50:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:50:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:50:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:50:21 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:30.202 00:50:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:50:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:50:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:50:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:50:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:50:23 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:31.573 Hugepages
00:04:31.574 node hugesize free / total
00:04:31.574 Type BDF Vendor Device NUMA Driver Device Block devices
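The status output above summarizes the per-node hugepage pools before listing PCI devices. Roughly the same free/total numbers can be read straight from sysfs; a sketch (the paths are the standard kernel layout, not SPDK-specific):

    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*kB; do
            size=${hp##*hugepages-}
            echo "${node##*/} $size: $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
        done
    done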
[as collect_setup_devs read each status line, acl.sh@18-20 matched it against *:*:*.*: the hugepage pool lines (1048576kB and 2048kB per node) and the sixteen I/OAT DMA channels at 0000:00:04.0-7 and 0000:80:04.0-7 (8086 0e20-0e27, driver ioatdma) all hit continue, leaving only the NVMe controller]
00:50:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:50:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:50:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:50:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:50:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:50:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:50:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:50:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:50:24 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:50:24 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:50:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:31.574 ************************************
00:04:31.574 START TEST denied
00:04:31.574 ************************************
00:50:24 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied
00:50:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:50:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:50:24 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:50:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:50:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:32.947 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
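The denied test drives scripts/setup.sh through the PCI_BLOCKED environment variable, exactly as the trace shows; the allowed test below does the same with PCI_ALLOWED. Reproduced outside the harness it would look like this (BDF as on this host):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    PCI_BLOCKED="0000:88:00.0" ./scripts/setup.sh config   # the controller is skipped
    PCI_ALLOWED="0000:88:00.0" ./scripts/setup.sh config   # only this controller is touched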
00:50:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:50:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:50:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:50:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:50:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:50:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:50:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:50:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:50:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:50:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:35.477
00:04:35.477 real 0m3.765s
00:04:35.477 user 0m1.090s
00:04:35.477 sys 0m1.797s
00:50:28 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:50:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:35.477 ************************************
00:04:35.477 END TEST denied
00:04:35.477 ************************************
00:50:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:50:28 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:50:28 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:50:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:35.477 ************************************
00:04:35.477 START TEST allowed
00:04:35.477 ************************************
00:50:28 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:50:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:50:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:50:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:50:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:50:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:38.005 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:50:30 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:50:30 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:50:30 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:50:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:50:30 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:39.381
00:04:39.381 real 0m3.950s
00:04:39.381 user 0m1.019s
00:04:39.381 sys 0m1.781s
00:50:32 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:50:32 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:39.381 ************************************
00:04:39.381 END TEST allowed
00:04:39.381 ************************************
00:04:39.381
00:04:39.381 real 0m10.604s
00:04:39.381 user 0m3.280s
00:04:39.381 sys 0m5.367s
00:50:32 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:50:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:39.381 ************************************
00:04:39.381 END TEST acl
00:04:39.381 ************************************
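Both acl sub-tests closed by resolving the controller's driver symlink in sysfs (acl.sh@32): denied expected it to stay on nvme, allowed expected vfio-pci after the rebind. The same check as a one-liner:

    basename "$(readlink -f /sys/bus/pci/devices/0000:88:00.0/driver)"   # prints nvme or vfio-pci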
00:50:32 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:50:32 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:50:32 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:50:32 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:39.381 ************************************
00:04:39.381 START TEST hugepages
00:04:39.381 ************************************
00:50:32 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:04:39.381 * Looking for test storage...
00:04:39.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:50:32 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:50:32 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:50:32 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:50:32 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:50:32 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:50:32 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:50:32 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:50:32 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:50:32 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:50:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:50:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:50:32 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 41286548 kB' 'MemAvailable: 44775248 kB' 'Buffers: 2704 kB' 'Cached: 12725624 kB' 'SwapCached: 0 kB' 'Active: 9704796 kB' 'Inactive: 3493860 kB' 'Active(anon): 9315892 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473552 kB' 'Mapped: 227808 kB' 'Shmem: 8845564 kB' 'KReclaimable: 199548 kB' 'Slab: 575684 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376136 kB' 'KernelStack: 12640 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 10440276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196292 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:50:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
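get_meminfo splits each /proc/meminfo line on ': ' and echoes the value whose key matches the requested field; the dump above carries Hugepagesize: 2048 kB, so 2048 is what the walk below will return. An equivalent one-liner:

    awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo   # -> 2048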
[log condensed: setup/common.sh@31-32 repeated the read/compare/continue cycle at 00:04:39.382-00:04:39.383 for every remaining /proc/meminfo field, MemFree through HugePages_Surp, none of which matched Hugepagesize]
00:50:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:50:32 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:50:32 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
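clear_hp empties every per-node pool by writing 0 into its nr_hugepages file; allocation later works the same way in the other direction. A sketch (requires root; the 2048kB pool as on this machine):

    # drop all 2 MiB pages on node0, then reserve 1024 globally
    echo 0    > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # what nr_hugepages=1024 below asks for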
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:50:32 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:50:32 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:50:32 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:50:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:39.383 ************************************
00:04:39.383 START TEST default_setup
00:04:39.383 ************************************
00:50:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:50:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:50:32 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:50:32 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
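Running setup.sh with no arguments rebinds the I/OAT channels and the NVMe controller to vfio-pci, as the output below shows. One such rebind through the generic sysfs interface would look roughly like this (BDF illustrative; setup.sh automates it across all devices):

    bdf=0000:00:04.7
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"      # detach ioatdma
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"    # steer the next probe
    echo "$bdf"   > /sys/bus/pci/drivers_probe                     # rebind; now on vfio-pci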
00:04:40.758 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:40.758 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:40.758 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:40.758 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:40.758 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:40.758 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:40.758 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:40.758 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:40.758 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:40.758 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:40.758 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:40.758 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:40.758 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:40.758 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:40.758 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:40.758 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:41.695 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:41.695 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43376676 kB' 'MemAvailable: 46865332 kB' 'Buffers: 2704 kB' 'Cached: 12725712 kB' 'SwapCached: 0 kB' 'Active: 9722364 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333460 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490620 kB' 'Mapped: 227492 kB' 'Shmem: 8845652 kB' 'KReclaimable: 199460 kB' 'Slab: 575688 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376228 kB' 'KernelStack: 12640 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10456924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196372 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:41.696 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [condensed: the IFS/read/compare/continue trace repeats for every snapshot field from MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
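The condensed scan above is the whole mechanism behind get_meminfo: walk the snapshot one field at a time, skip non-matches with continue, and echo the value of the first match (0 kB for AnonHugePages on this runner). A minimal re-creation of that pattern; note the real common.sh also handles per-node snapshots via /sys/devices/system/node/node<N>/meminfo, which this sketch omits:

    # Echo the value of one /proc/meminfo field, mirroring the traced
    # IFS/read/compare/continue loop; returns 1 if the field is absent.
    get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
      done < /proc/meminfo
      return 1
    }
    get_meminfo AnonHugePages   # -> 0 on this runner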
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:41.697 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43381656 kB' 'MemAvailable: 46870312 kB' 'Buffers: 2704 kB' 'Cached: 12725716 kB' 'SwapCached: 0 kB' 'Active: 9721832 kB' 'Inactive: 3493860 kB' 'Active(anon): 9332928 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490508 kB' 'Mapped: 227408 kB' 'Shmem: 8845656 kB' 'KReclaimable: 199460 kB' 'Slab: 575696 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376236 kB' 'KernelStack: 12608 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10456944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196340 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:41.698 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [condensed: the same per-field scan repeats from MemTotal through HugePages_Rsvd, none matching HugePages_Surp]
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
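At this point verify_nr_hugepages has anon=0 and surp=0, and the next query (traced below) fetches HugePages_Rsvd. A sketch of the surplus-adjusted bookkeeping these three values feed into, reusing the get_meminfo sketch above; the exact comparison in hugepages.sh may differ, so treat this as an illustration of the check rather than the script's literal code:

    # Snapshot above: HugePages_Total=1024, HugePages_Free=1024,
    # HugePages_Rsvd=0, HugePages_Surp=0 -- the 1024 pages requested by
    # get_test_nr_hugepages are all present and none of them are surplus.
    total=$(get_meminfo HugePages_Total)   # 1024
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    if (( total - surp == 1024 )); then
      echo "nr_hugepages verified: $(( total - surp )) (resv=$resv)"
    fi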
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:41.699 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43382404 kB' 'MemAvailable: 46871060 kB' 'Buffers: 2704 kB' 'Cached: 12725732 kB' 'SwapCached: 0 kB' 'Active: 9721852 kB' 'Inactive: 3493860 kB' 'Active(anon): 9332948 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490536 kB' 'Mapped: 227408 kB' 'Shmem: 8845672 kB' 'KReclaimable: 199460 kB' 'Slab: 575696 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376236 kB' 'KernelStack: 12688 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10456964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196324 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:41.960 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [condensed: the per-field scan repeats from MemTotal through Unaccepted against HugePages_Rsvd; the captured log breaks off mid-scan here]
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.961 nr_hugepages=1024 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.961 resv_hugepages=0 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.961 surplus_hugepages=0 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.961 anon_hugepages=0 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.961 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43382228 kB' 'MemAvailable: 46870884 kB' 'Buffers: 2704 kB' 'Cached: 12725756 kB' 'SwapCached: 0 kB' 'Active: 9721836 kB' 'Inactive: 3493860 kB' 'Active(anon): 9332932 kB' 'Inactive(anon): 0 kB' 
'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490508 kB' 'Mapped: 227408 kB' 'Shmem: 8845696 kB' 'KReclaimable: 199460 kB' 'Slab: 575784 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376324 kB' 'KernelStack: 12688 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10456988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196324 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
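The cycle traced above is setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a node's meminfo file under sysfs), then walks it one "key: value" row at a time until the requested key matches, and echoes that key's value. Below is a minimal, self-contained sketch of the same parsing pattern; the function name and exact structure are illustrative assumptions, not the real SPDK helper.

  #!/usr/bin/env bash
  shopt -s extglob  # the "Node N " prefix strip below uses an extended glob
  # get_meminfo_sketch KEY [NODE] -- print the value of one meminfo key.
  # Illustrative sketch of the pattern traced above, not the real setup/common.sh.
  get_meminfo_sketch() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      # Per-node statistics live under sysfs when a node ID is supplied.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }        # node files prefix each row with "Node N"
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then      # e.g. HugePages_Total
              echo "$val"                    # value only; the "kB" column is dropped
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # get_meminfo_sketch HugePages_Total   -> 1024   (system-wide)
  # get_meminfo_sketch HugePages_Surp 0  -> 0      (node 0 only)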
00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:41.963 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:41.964 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19246164 kB' 'MemUsed: 13630776 kB' 'SwapCached: 0 kB' 'Active: 7020772 kB' 'Inactive: 3248472 kB' 'Active(anon): 6810036 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9916612 kB' 'Mapped: 191904 kB' 'AnonPages: 355812 kB' 'Shmem: 6457404 kB' 'KernelStack: 8072 kB' 'PageTables: 5732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127464 kB' 'Slab: 383144 kB' 'SReclaimable: 127464 kB' 'SUnreclaim: 255680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 repeats the read/compare/continue cycle for every node0 meminfo key that is not HugePages_Surp, from MemTotal through HugePages_Free]
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:41.965 node0=1024 expecting 1024
00:50:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:41.965
00:04:41.965 real    0m2.485s
00:04:41.965 user    0m0.637s
00:04:41.965 sys     0m0.915s
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:41.965 00:50:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:41.965 ************************************
00:04:41.965 END TEST default_setup
00:04:41.966 ************************************
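The verification that just finished reduces to a small amount of arithmetic: the kernel-reported HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, and per-node expectations (node0=1024 here) are gathered by enumerating /sys/devices/system/node/node*. A hedged sketch of that bookkeeping, reusing get_meminfo_sketch from the earlier sketch (all names illustrative, not setup/hugepages.sh):

  #!/usr/bin/env bash
  # Accounting behind "node0=1024 expecting 1024"; requires get_meminfo_sketch above.
  nr_hugepages=1024
  surp=$(get_meminfo_sketch HugePages_Surp)
  resv=$(get_meminfo_sketch HugePages_Rsvd)
  total=$(get_meminfo_sketch HugePages_Total)
  # The kernel's total must be exactly the request plus surplus/reserved pages.
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
  # Per-node view: enumerate NUMA nodes the same way the trace does.
  declare -a nodes_test
  for node in /sys/devices/system/node/node[0-9]*; do
      id=${node##*node}                                   # "node0" -> "0"
      nodes_test[id]=$(get_meminfo_sketch HugePages_Total "$id")
      echo "node$id=${nodes_test[id]}"                    # cf. node0=1024 above
  done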
00:04:41.966 00:50:34 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:50:34 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:50:34 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:41.966 ************************************
00:04:41.966 START TEST per_node_1G_alloc
00:04:41.966 ************************************
00:04:41.966 00:50:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:50:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:42.898 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:42.898 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:42.898 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:42.898 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:42.898 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:42.898 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:42.898 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:42.898 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:42.898 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:42.898 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:42.898 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:42.898 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:42.898 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:42.898 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:42.898 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:42.898 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:42.898 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:43.160 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
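NRHUGE=512 with HUGENODE=0,1 asks scripts/setup.sh to reserve 512 hugepages on each of the two NUMA nodes (2 x 512 matching the nr_hugepages=1024 just recorded). At the kernel interface, a per-node reservation like this ultimately comes down to writing the per-node sysfs knob; the sketch below shows only that mechanism and is not the actual scripts/setup.sh logic, which layers driver binding and error handling on top:

  #!/usr/bin/env bash
  # Per-node hugepage reservation via the kernel's sysfs knobs (requires root).
  # Illustrative only; scripts/setup.sh does considerably more than this.
  NRHUGE=${NRHUGE:-512}
  HUGENODE=${HUGENODE:-0,1}
  # Default hugepage size in kB, e.g. 2048, taken from /proc/meminfo.
  size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  IFS=',' read -ra nodes <<< "$HUGENODE"
  for n in "${nodes[@]}"; do
      knob=/sys/devices/system/node/node$n/hugepages/hugepages-${size_kb}kB/nr_hugepages
      echo "$NRHUGE" > "$knob"
      # Read the knob back: the kernel may grant fewer pages than requested.
      echo "node$n: requested $NRHUGE, kernel granted $(cat "$knob")"
  done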
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:43.160 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:43.160 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.160 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.160 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:43.160 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:43.160 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:43.160 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43371376 kB' 'MemAvailable: 46860032 kB' 'Buffers: 2704 kB' 'Cached: 12725824 kB' 'SwapCached: 0 kB' 'Active: 9722240 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333336 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490800 kB' 'Mapped: 227400 kB' 'Shmem: 8845764 kB' 'KReclaimable: 199460 kB' 'Slab: 575792 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376332 kB' 'KernelStack: 12688 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10457164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB' 00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- 
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43371376 kB' 'MemAvailable: 46860032 kB' 'Buffers: 2704 kB' 'Cached: 12725824 kB' 'SwapCached: 0 kB' 'Active: 9722240 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333336 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490800 kB' 'Mapped: 227400 kB' 'Shmem: 8845764 kB' 'KReclaimable: 199460 kB' 'Slab: 575792 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376332 kB' 'KernelStack: 12688 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10457164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.161 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same @32 comparison / @32 continue / @31 IFS=': ' / @31 read sequence repeats for every remaining /proc/meminfo field, MemFree through HardwareCorrupted, until the requested field matches ...]
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
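Annotation: this get_meminfo call, and the two that follow, all trace the same pattern: slurp the meminfo file into an array, strip any "Node N " prefix so per-node and global files parse identically, then split each line on ': ' and print the value of the requested field. A condensed sketch of that logic, reconstructed from the trace rather than copied verbatim from setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {                        # usage: get_meminfo <field> [node]
      local get=$1 node=${2:-}
      local var val _ mem mem_f=/proc/meminfo
      # With a node argument, read the per-node file instead of the global one.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # per-node lines carry a "Node N " prefix
      while IFS=': ' read -r var val _; do # _ swallows the trailing "kB", if any
        if [[ $var == "$get" ]]; then
          echo "$val"
          return 0
        fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
    }
    get_meminfo HugePages_Total            # on this host prints: 1024

The scan is linear and case-sensitive, which is why the trace above walks every field before AnonHugePages in file order.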
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43375756 kB' 'MemAvailable: 46864412 kB' 'Buffers: 2704 kB' 'Cached: 12725824 kB' 'SwapCached: 0 kB' 'Active: 9722532 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333628 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491064 kB' 'Mapped: 227384 kB' 'Shmem: 8845764 kB' 'KReclaimable: 199460 kB' 'Slab: 575768 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376308 kB' 'KernelStack: 12768 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10457184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.162 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same @32 comparison / @32 continue / @31 IFS=': ' / @31 read sequence repeats for every remaining field, MemFree through HugePages_Rsvd, until HugePages_Surp matches ...]
00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
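Annotation: both of the remaining probes should read 0 on a healthy static reservation: HugePages_Surp counts surplus pages allocated beyond the configured pool via overcommit, and HugePages_Rsvd counts pages promised to mappings but not yet faulted in. The same fields can be spot-checked outside the harness with a plain grep (illustrative; output values taken from the dumps above):

    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # HugePages_Total:    1024
    # HugePages_Free:     1024
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0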
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.164 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43376440 kB' 'MemAvailable: 46865096 kB' 'Buffers: 2704 kB' 'Cached: 12725844 kB' 'SwapCached: 0 kB' 'Active: 9722176 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333272 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490644 kB' 'Mapped: 227384 kB' 'Shmem: 8845784 kB' 'KReclaimable: 199460 kB' 'Slab: 575836 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376376 kB' 'KernelStack: 12736 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10457208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.165 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.165 00:50:36 
00:04:43.165-00:04:43.166 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue over non-matching fields: Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
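What the trace above is exercising: setup/common.sh walks /proc/meminfo with IFS=': ' and read, skipping every field with continue until the requested key matches, then echoes that field's value. A minimal standalone sketch of the same pattern, under the assumption that this is all get_meminfo does here; get_field is an illustrative name, not the script's own function:

  # Minimal sketch, not the script's code: scan /proc/meminfo the way the
  # trace above does and print the value of one field.
  get_field() {                         # illustrative name
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # IFS=': ' turns "HugePages_Rsvd:       0" into var=HugePages_Rsvd val=0
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  get_field HugePages_Rsvd              # prints 0 in the run above

The backslash-escaped pattern in the trace (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) is just how xtrace renders the quoted right-hand side of that [[ ... == ... ]] comparison.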
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:43.167 nr_hugepages=1024
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.167 resv_hugepages=0
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.167 surplus_hugepages=0
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:43.167 anon_hugepages=0
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.167 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43376876 kB' 'MemAvailable: 46865532 kB' 'Buffers: 2704 kB' 'Cached: 12725868 kB' 'SwapCached: 0 kB' 'Active: 9722148 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333244 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490640 kB' 'Mapped: 227384 kB' 'Shmem: 8845808 kB' 'KReclaimable: 199460 kB' 'Slab: 575836 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376376 kB' 'KernelStack: 12736 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10457228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:43.167-00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue over non-matching fields: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
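The checks at setup/hugepages.sh@107-110 assert that the kernel-reported HugePages_Total (1024) equals nr_hugepages + surp + resv (1024 + 0 + 0); with Hugepagesize at 2048 kB that is also consistent with the Hugetlb line printed above (1024 x 2048 kB = 2097152 kB). get_nodes, traced next, then sizes one nodes_sys slot per NUMA node. A rough standalone sketch under this run's values; check_hugepages is an illustrative name and awk stands in for the script's get_meminfo:

  # Rough sketch, assuming this run's values (1024 pages, 512 per node,
  # 2 NUMA nodes); not the script's own implementation.
  shopt -s extglob                      # needed for the node+([0-9]) pattern

  check_hugepages() {
      local nr_hugepages=1024 resv=0 surp=0 total node
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      (( total == nr_hugepages + surp + resv )) || return 1
      local -a nodes_sys=()
      for node in /sys/devices/system/node/node+([0-9]); do
          nodes_sys[${node##*node}]=512 # 1024 pages split evenly over 2 nodes
      done
      echo "no_nodes=${#nodes_sys[@]}"  # prints no_nodes=2 on this machine
  }

${node##*node} strips everything through the last "node", leaving just the numeric index, which is why the trace shows nodes_sys[${node##*node}]=512 per iteration.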
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.430 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.431 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20301136 kB' 'MemUsed: 12575804 kB' 'SwapCached: 0 kB' 'Active: 7020588 kB' 'Inactive: 3248472 kB' 'Active(anon): 6809852 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9916620 kB' 'Mapped: 191880 kB' 'AnonPages: 355528 kB' 'Shmem: 6457412 kB' 'KernelStack: 8056 kB' 'PageTables: 5636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127464 kB' 'Slab: 383108 kB' 'SReclaimable: 127464 kB' 'SUnreclaim: 255644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:43.431-00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue over non-matching node0 fields: MemTotal MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted HugePages_Total HugePages_Free
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.432 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23077344 kB' 'MemUsed: 4587444 kB' 'SwapCached: 0 kB' 'Active: 2701612 kB' 'Inactive: 245388 kB' 'Active(anon): 2523444 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2811996 kB' 'Mapped: 35504 kB' 'AnonPages: 135116 kB' 'Shmem: 2388440 kB' 'KernelStack: 4680 kB' 'PageTables: 2580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71996 kB' 'Slab: 192728 kB' 'SReclaimable: 71996 kB' 'SUnreclaim: 120732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
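Per-node queries differ from the global one only in the data source: common.sh@23-24 swaps mem_f for /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that common.sh@29 strips with the extglob pattern +([0-9]) before the same field scan runs. A minimal sketch of that lookup; node_field is an illustrative name, not the script's:

  # Minimal sketch of the per-node lookup traced above; not the script's code.
  shopt -s extglob                          # for the +([0-9]) pattern

  node_field() {
      local node=$1 get=$2 mem_f=/proc/meminfo var val _
      local -a mem
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 0 " line prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  node_field 0 HugePages_Surp               # prints 0 in the run above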
00:04:43.432-00:04:43.433 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue over non-matching node1 fields: MemTotal MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.433 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.433 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.433 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.433 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.433 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:43.434 node0=512 expecting 512 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:43.434 node1=512 expecting 512 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:43.434 00:04:43.434 real 0m1.389s 00:04:43.434 user 0m0.584s 00:04:43.434 sys 0m0.762s 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.434 00:50:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.434 ************************************ 00:04:43.434 END TEST per_node_1G_alloc 00:04:43.434 ************************************ 00:04:43.434 00:50:36 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:43.434 00:50:36 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.434 00:50:36 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.434 00:50:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.434 ************************************ 00:04:43.434 START TEST even_2G_alloc 
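The xtrace above (and each verify pass below) exercises setup/common.sh's get_meminfo helper: the @31/@32 lines are a single IFS=': ' read loop that walks a meminfo file, continues past every field until the requested key matches, and echoes its value (or 0 if the key never appears). A minimal sketch of that technique, reconstructed from the trace rather than copied from SPDK's source:

    #!/usr/bin/env bash
    # Sketch: print the value of one /proc/meminfo field, or 0 if absent.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # a line looks like "HugePages_Surp:        0" (any unit lands in $_)
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done </proc/meminfo
        echo 0   # key not found: report 0, like the trace's final "echo 0"
    }

    get_meminfo HugePages_Surp   # prints 0 on this host: no surplus pages

For per-node lookups, the traced helper first mapfiles /sys/devices/system/node/node<N>/meminfo and strips the leading "Node <N> " prefix from each line (the mem=("${mem[@]#Node +([0-9]) }") step visible in the trace), so the same read loop works on both file shapes.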
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.434 00:50:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:44.367 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:44.367 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:44.367 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:44.367 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:44.367 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:44.367 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
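The @49-@84 sequence above derives the per-node plan: 2097152 KiB requested / 2048 KiB per hugepage = 1024 pages, split evenly over 2 NUMA nodes (512 each, filled from the highest node index down). A minimal sketch of that arithmetic, with names mirroring the trace (the even division with no remainder handling is an assumption of this illustration):

    #!/usr/bin/env bash
    size=2097152                    # requested pool in KiB (2 GiB)
    default_hugepages=2048          # Hugepagesize in KiB (2 MiB pages)
    nr_hugepages=$((size / default_hugepages))   # -> 1024
    _no_nodes=2
    share=$((nr_hugepages / _no_nodes))          # -> 512 per node

    declare -a nodes_test
    for ((node = _no_nodes - 1; node >= 0; node--)); do
        nodes_test[node]=$share     # the trace fills the highest index first
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting $share"
    done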
00:04:44.367 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:44.367 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:44.367 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:44.367 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:44.367 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:44.367 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:44.367 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:44.367 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:44.367 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:44.367 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:44.367 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.630 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43370892 kB' 'MemAvailable: 46859548 kB' 'Buffers: 2704 kB' 'Cached: 12725968 kB' 'SwapCached: 0 kB' 'Active: 9722512 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333608 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490916 kB' 'Mapped: 227424 kB' 'Shmem: 8845908 kB' 'KReclaimable: 199460 kB' 'Slab: 575780 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376320 kB' 'KernelStack: 12720 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10457620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: setup/common.sh@31-32 read/continue repeated for every /proc/meminfo field preceding AnonHugePages (MemTotal ... HardwareCorrupted)]
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
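The @96 test above gates the AnonHugePages sample: the string it matches, "always [madvise] never", is the kernel's /sys/kernel/mm/transparent_hugepage/enabled report, and the sample is taken only when THP is not pinned to "never". A hedged sketch of that gate (standard kernel sysfs paths; the awk lookup stands in for get_meminfo):

    #!/usr/bin/env bash
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *'[never]'* ]]; then
        # THP may back anonymous memory with 2 MiB pages; record the current
        # amount so hugetlb accounting can exclude it (0 kB in this trace)
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    fi
    echo "anon=$anon"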
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.632 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43371144 kB' ... [second full /proc/meminfo snapshot, same shape as the one above; fields that moved: MemAvailable: 46859800 kB, Active: 9722852 kB, Active(anon): 9333948 kB, AnonPages: 491320 kB, Slab: 575740 kB, SUnreclaim: 376280 kB, KernelStack: 12752 kB, PageTables: 8208 kB, Committed_AS: 10457636 kB, VmallocUsed: 196372 kB; hugepage counters unchanged: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0]
[xtrace condensed: setup/common.sh@31-32 read/continue repeated for every field preceding HugePages_Surp (MemTotal ... HugePages_Rsvd)]
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
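At this point the verifier has anon=0 and surp=0 and is mid-way through fetching HugePages_Rsvd. A hedged sketch of the acceptance check this sequence appears to be building toward; the exact expression is an assumption for illustration, not lifted from hugepages.sh:

    #!/usr/bin/env bash
    # compact stand-in for the get_meminfo sketch shown earlier
    get_meminfo() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

    anon=$(get_meminfo AnonHugePages)    # 0 in the trace above
    surp=$(get_meminfo HugePages_Surp)   # 0 in the trace above
    resv=$(get_meminfo HugePages_Rsvd)   # the lookup in flight here
    total=$(get_meminfo HugePages_Total)
    free=$(get_meminfo HugePages_Free)

    # hypothetical acceptance test: the configured pool is fully present,
    # free, unreserved, and non-surplus before the allocation test runs
    if ((total == 1024 && free == total && surp == 0 && resv == 0)); then
        echo "hugepage pool matches NRHUGE=1024"
    fi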
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.634 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43372380 kB' 'MemAvailable: 46861036 kB' 'Buffers: 2704 kB' 'Cached: 12725988 kB' 'SwapCached: 0 kB' 'Active: 9722404 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333500 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490824 kB' 'Mapped: 227400 kB' 'Shmem: 8845928 kB' 'KReclaimable: 199460 kB' 'Slab: 575856 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376396 kB' 'KernelStack: 12752 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10457656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196372 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
[... per-field scan elided: '[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' followed by 'continue' for every field from MemTotal through Unaccepted; none match ...]
00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.636 nr_hugepages=1024 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.636 resv_hugepages=0 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.636 surplus_hugepages=0 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.636 anon_hugepages=0 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.636 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43376012 
kB' 'MemAvailable: 46864668 kB' 'Buffers: 2704 kB' 'Cached: 12726012 kB' 'SwapCached: 0 kB' 'Active: 9722436 kB' 'Inactive: 3493860 kB' 'Active(anon): 9333532 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490824 kB' 'Mapped: 227400 kB' 'Shmem: 8845952 kB' 'KReclaimable: 199460 kB' 'Slab: 575840 kB' 'SReclaimable: 199460 kB' 'SUnreclaim: 376380 kB' 'KernelStack: 12768 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10457680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196372 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
[... per-field scan elided: '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' followed by 'continue' for every field from MemTotal through Unaccepted; none match ...]
00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
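At hugepages.sh@107 and @110 the test cross-checks the kernel's hugepage accounting: get_meminfo HugePages_Total returns 1024 (echoed below), and that total must equal the requested nr_hugepages plus the surplus and reserved counts read just before (surp=0, resv=0). A sketch of the same consistency check using the values reported in this run; check_hugepage_accounting is an illustrative name, not the script's:

  # Assert HugePages_Total == nr_hugepages + surplus + reserved,
  # with the counters read straight from /proc/meminfo.
  check_hugepage_accounting() {
      local nr_hugepages=1024 surp resv total   # 1024 pages requested in this run
      surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
      resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      (( total == nr_hugepages + surp + resv ))
  }
  # check_hugepage_accounting && echo "hugepage accounting consistent"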
00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.638 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20304064 kB' 'MemUsed: 12572876 kB' 'SwapCached: 0 kB' 'Active: 7021280 kB' 'Inactive: 3248472 kB' 'Active(anon): 6810544 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9916640 kB' 'Mapped: 191896 kB' 'AnonPages: 356384 kB' 'Shmem: 6457432 kB' 'KernelStack: 8088 kB' 'PageTables: 5744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127464 kB' 'Slab: 383192 kB' 'SReclaimable: 127464 kB' 'SUnreclaim: 255728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-field scan elided: '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by 'continue' for every node0 field from MemTotal through Unaccepted; none match ...]
00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.899 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23070196 kB' 'MemUsed: 4594592 kB' 'SwapCached: 0 kB' 'Active: 2701416 kB' 'Inactive: 245388 kB' 'Active(anon): 2523248 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2812116 kB' 'Mapped: 35504 kB' 'AnonPages: 134772 kB' 'Shmem: 2388560 kB' 'KernelStack: 4664 kB' 'PageTables: 2444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71996 kB' 'Slab: 192600 kB' 'SReclaimable: 71996 kB' 'SUnreclaim: 120604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.900 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:44.901 node0=512 expecting 512 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:44.901 node1=512 expecting 512 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:44.901 00:04:44.901 real 0m1.416s 00:04:44.901 user 0m0.578s 00:04:44.901 sys 0m0.791s 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.901 00:50:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 ************************************ 00:04:44.901 END TEST even_2G_alloc 00:04:44.901 ************************************ 00:04:44.901 00:50:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:44.901 00:50:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.901 00:50:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.901 00:50:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.901 ************************************ 00:04:44.901 START TEST odd_alloc 00:04:44.901 ************************************ 00:04:44.901 00:50:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:44.901 00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:44.901 00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:44.901 00:50:37 
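The even_2G_alloc pass above exercised the same get_meminfo helper once per NUMA node; its trace (common.sh@17-33) shows the pattern: pick /proc/meminfo or the per-node sysfs meminfo file, then scan key/value pairs until the requested key matches and print its value. A minimal standalone sketch of that traced pattern, assuming bash and the usual Linux sysfs layout; this is an illustration, not the SPDK setup/common.sh source itself:

    #!/usr/bin/env bash
    # Print the value of one meminfo key, system-wide or for a single NUMA node.
    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        # Per-node counters live under sysfs when a node is requested.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}         # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }
    get_meminfo HugePages_Surp 1    # e.g. prints 0 on the node traced above

Splitting on IFS=': ' discards both the trailing colon on the key and the "kB" unit, which is why the trace compares bare key names like HugePages_Surp.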
00:04:44.901 00:50:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:44.901 00:50:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:44.901 00:50:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:44.901 00:50:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:44.901 ************************************
00:04:44.901 START TEST odd_alloc
00:04:44.901 ************************************
00:50:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:50:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:50:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:50:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:45.850 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:45.850 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:45.850 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:45.850 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:45.850 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:45.850 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:45.850 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:45.850 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:45.850 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:45.850 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:45.850 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:45.850 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:45.850 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:45.850 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:45.850 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:45.850 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:45.850 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
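The per-node assignment traced at hugepages.sh@81-84 above hands the odd total of 1025 pages to two nodes as 513 and 512: every node gets the floor of total/nodes, and the leftover page lands on node0. A small sketch of that arithmetic; the helper name and loop shape are mine for illustration, not the hugepages.sh implementation:

    #!/usr/bin/env bash
    # Spread a hugepage count across NUMA nodes: floor share everywhere,
    # with the first (total % nodes) nodes picking up one extra page each.
    split_hugepages() {
        local total=$1 nodes=$2 node base rem
        base=$((total / nodes))
        rem=$((total % nodes))
        for ((node = 0; node < nodes; node++)); do
            echo "node$node=$((base + (node < rem ? 1 : 0)))"
        done
    }
    split_hugepages 1025 2    # -> node0=513, node1=512, matching the trace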
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43387136 kB' 'MemAvailable: 46875788 kB' 'Buffers: 2704 kB' 'Cached: 12726096 kB' 'SwapCached: 0 kB' 'Active: 9724524 kB' 'Inactive: 3493860 kB' 'Active(anon): 9335620 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492816 kB' 'Mapped: 227376 kB' 'Shmem: 8846036 kB' 'KReclaimable: 199452 kB' 'Slab: 575676 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 376224 kB' 'KernelStack: 12688 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10450028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196424 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:46.133 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (scan: meminfo keys MemTotal through HardwareCorrupted are read and skipped via continue; none match AnonHugePages)
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
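Before trusting AnonHugePages, verify_nr_hugepages first gates on transparent hugepages (hugepages.sh@96 above): the bracketed token in the sysfs file marks the active mode, and "always [madvise] never" in this trace means madvise. A sketch of that gate, assuming the standard sysfs path; the variable names are mine, not the hugepages.sh code:

    #!/usr/bin/env bash
    # THP is effectively off only when the sysfs file shows "[never]".
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP may still create anonymous hugepages, so account for them.
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon} kB"
    fi

Here anon came back 0, so transparent hugepages contribute nothing to the counts being verified.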
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43390288 kB' 'MemAvailable: 46878940 kB' 'Buffers: 2704 kB' 'Cached: 12726100 kB' 'SwapCached: 0 kB' 'Active: 9719524 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330620 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488252 kB' 'Mapped: 227064 kB' 'Shmem: 8846040 kB' 'KReclaimable: 199452 kB' 'Slab: 575712 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 376260 kB' 'KernelStack: 12672 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10446292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:46.135 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (scan in progress: meminfo keys MemTotal through ShmemHugePages are read and skipped via continue; none match HugePages_Surp)
00:04:46.136 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.136 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43389032 kB' 'MemAvailable: 46877684 kB' 'Buffers: 2704 kB' 'Cached: 12726112 kB' 'SwapCached: 0 kB' 'Active: 9719256 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330352 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487508 kB' 'Mapped: 227064 kB' 'Shmem: 8846052 kB' 'KReclaimable: 199452 kB' 'Slab: 575712 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 376260 kB' 'KernelStack: 12752 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10444944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
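The cycle just traced is setup/common.sh's get_meminfo helper: snapshot a meminfo file, strip any "Node N " prefix, split each line on IFS=': ' into key/value, and echo the value whose key matches $get. A minimal self-contained sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh (the helper name is mine, and the sed-based prefix strip stands in for the extglob expansion the real script uses):

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <key> [node]; prints the key's value.
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        # First field is the key, second the number; trailing "kB" lands in _
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1  # key not present in this meminfo file
}

On this box, get_meminfo_sketch HugePages_Surp should print 0, matching the surp=0 just recorded above.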
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43389032 kB' 'MemAvailable: 46877684 kB' 'Buffers: 2704 kB' 'Cached: 12726112 kB' 'SwapCached: 0 kB' 'Active: 9719256 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330352 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487508 kB' 'Mapped: 227064 kB' 'Shmem: 8846052 kB' 'KReclaimable: 199452 kB' 'Slab: 575712 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 376260 kB' 'KernelStack: 12752 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10444944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB' 00:04:46.137
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [key scan vs HugePages_Rsvd, one continue per non-matching key: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free] 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:46.139
nr_hugepages=1025
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.139
resv_hugepages=0
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.139
surplus_hugepages=0
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.139
anon_hugepages=0
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:46.139
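What hugepages.sh is asserting at @107-@109 is the kernel's hugepage accounting identity: the HugePages_Total the kernel reports must equal the nr_hugepages the test requested plus any surplus and reserved pages (1025 == 1025 + 0 + 0 on this run). A sketch of that check, assuming the hypothetical get_meminfo_sketch helper above:

verify_hugepages() {
    # Mirror the arithmetic traced at hugepages.sh@107-110: the kernel's
    # total must account for requested, surplus, and reserved pages.
    local nr_hugepages=$1 surp resv total
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    (( total == nr_hugepages + surp + resv ))
}

On this run, verify_hugepages 1025 would print the same three values echoed above and succeed.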
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.139
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43387556 kB' 'MemAvailable: 46876208 kB' 'Buffers: 2704 kB' 'Cached: 12726136 kB' 'SwapCached: 0 kB' 'Active: 9720312 kB' 'Inactive: 3493860 kB' 'Active(anon): 9331408 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488536 kB' 'Mapped: 226584 kB' 'Shmem: 8846076 kB' 'KReclaimable: 199452 kB' 'Slab: 575700 kB' 'SReclaimable: 199452 kB' 'SUnreclaim: 376248 kB' 'KernelStack: 12912 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 10444964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB' 00:04:46.140
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [key scan vs HugePages_Total, one continue per non-matching key: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted] 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.141
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.142
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.142
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.142
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.142
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.142
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.142
00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20304388 kB' 'MemUsed: 12572552 kB' 'SwapCached: 0 kB' 'Active: 7020728 kB' 'Inactive: 3248472 kB' 'Active(anon): 6809992 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9916648 kB' 'Mapped: 191140 kB' 'AnonPages: 355648 kB' 'Shmem: 6457440 kB' 'KernelStack: 8024 kB' 'PageTables: 5324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127464 kB' 'Slab: 383156 kB' 'SReclaimable: 127464 kB' 'SUnreclaim: 255692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.142
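The odd_alloc case requests an odd page count (1025) precisely so it cannot divide evenly across this box's two NUMA nodes; get_nodes records the expected per-node split (512 on node0, 513 on node1), and the loop at hugepages.sh@115-117 then reads each node's counters from /sys/devices/system/node/nodeN/meminfo. A sketch of that per-node walk, using the hypothetical helper above (the 512/513 expectations are values taken from this run, not constants of the test):

shopt -s extglob nullglob                      # node+([0-9]) below needs extglob
declare -A nodes_expected=([0]=512 [1]=513)    # assumed split for 1025 pages on 2 nodes
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}                    # /sys/.../node1 -> 1
    total=$(get_meminfo_sketch HugePages_Total "$node")
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    echo "node$node: total=$total surp=$surp expected=${nodes_expected[$node]:-?}"
done

The node0 dump above already shows the first half of that split: HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0.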
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.141 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.142 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.142 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.142 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.142 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.142 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.142 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20304388 kB' 'MemUsed: 12572552 kB' 'SwapCached: 0 kB' 'Active: 7020728 kB' 'Inactive: 3248472 kB' 'Active(anon): 6809992 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9916648 kB' 'Mapped: 191140 kB' 'AnonPages: 355648 kB' 'Shmem: 6457440 kB' 'KernelStack: 8024 kB' 'PageTables: 5324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127464 kB' 'Slab: 383156 kB' 'SReclaimable: 127464 kB' 'SUnreclaim: 255692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:46.142 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.142 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
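Steps @115-@117 fold reserved and surplus pages into each node's expected count before comparing against sysfs; node 0's HugePages_Surp lookup just came back 0. A sketch of that accumulation, reusing the get_meminfo sketch above (resv stands for the reserved-page count tracked earlier in the run):

    # nodes_test[n] holds the pages the test assigned to node n; folding in
    # reserved and surplus pages keeps kernel-side adjustments from failing it.
    resv=0   # HugePages_Rsvd observed earlier
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # 0 here
    done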
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23080204 kB' 'MemUsed: 4584584 kB' 'SwapCached: 0 kB' 'Active: 2700480 kB' 'Inactive: 245388 kB' 'Active(anon): 2522312 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2812216 kB' 'Mapped: 35444 kB' 'AnonPages: 133788 kB' 'Shmem: 2388660 kB' 'KernelStack: 5048 kB' 'PageTables: 3328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71988 kB' 'Slab: 192608 kB' 'SReclaimable: 71988 kB' 'SUnreclaim: 120620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.143 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:46.404 node0=512 expecting 513
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:46.404 node1=513 expecting 512
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:46.404
00:04:46.404 real 0m1.428s
00:04:46.404 user 0m0.626s
00:04:46.404 sys 0m0.763s
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:46.404 00:50:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:46.404 ************************************
00:04:46.404 END TEST odd_alloc
00:04:46.404 ************************************
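The odd 1025th page may land on either node, which is why node0 reports 512 where the test assigned 513 and the test still passes: @126-@130 compare the two sets of per-node counts order-insensitively by using each count as an index into a scratch indexed array, whose key expansion comes back in ascending order. A standalone sketch of that trick with hypothetical per-node values:

    # The test asked for 513+512; the kernel placed 512+513. Same multiset.
    nodes_test=(513 512)   # counts assigned to node0, node1 (hypothetical order)
    nodes_sys=(512 513)    # counts sysfs actually reports

    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # the count itself becomes the array index
        sorted_s[nodes_sys[node]]=1
    done
    # Index expansion of an indexed array is ascending, so both sides expand
    # to "512 513" and the layout is accepted either way around.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "layout accepted"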
00:04:46.404 00:50:39 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:46.404 00:50:39 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:46.404 00:50:39 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:46.404 00:50:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:46.404 ************************************
00:04:46.404 START TEST custom_alloc
00:04:46.404 ************************************
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:46.404 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
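get_test_nr_hugepages converts a pool size in kB into a page count at the default huge page size and, with no explicit node list, splits it evenly: 1048576 kB over the 2048 kB pages reported in the snapshots gives the nr_hugepages=512 above, landing as 256 per node. A sketch of that arithmetic (the function body is reconstructed from the trace, not the verbatim script):

    default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' in the snapshots

    get_test_nr_hugepages() {
        local size=$1                      # requested pool size in kB
        (( size >= default_hugepages ))    # anything below one page makes no sense
        nr_hugepages=$(( size / default_hugepages ))
    }

    get_test_nr_hugepages 1048576   # 1 GiB -> nr_hugepages=512

    # Even split across both NUMA nodes, as in the @81/@82 loop: 512 -> 256 + 256.
    _no_nodes=2
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( nr_hugepages / 2 ))
        (( _no_nodes-- ))
    done

    get_test_nr_hugepages 2097152   # 2 GiB -> nr_hugepages=1024, used for node 1 below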
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.405 00:50:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:47.341 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:47.341 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:47.341 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:47.341 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:47.341 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:47.341 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:47.341 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:47.341 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:47.341 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:47.341 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:47.341 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:47.341 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:47.341 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:47.341 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:47.341 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:47.341 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:47.341 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
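scripts/setup.sh consumes the HUGENODE list built above and reserves each node's pages before (re)binding devices, which is what the vfio-pci lines report. A simplified sketch of the per-node reservation step, using the kernel's standard sysfs knob rather than SPDK's exact code path (the hugepages-2048kB directory assumes the 2 MiB default page size from the snapshots; writing it needs root):

    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'

    IFS=',' read -ra requests <<< "$HUGENODE"
    for req in "${requests[@]}"; do
        node=${req#nodes_hp[}; node=${node%%]*}   # 0, then 1
        pages=${req#*=}                           # 512, then 1024
        # Per-node huge page reservation knob exposed by the kernel:
        echo "$pages" \
            > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done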
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42362976 kB' 'MemAvailable: 45851676 kB' 'Buffers: 2704 kB' 'Cached: 12726228 kB' 'SwapCached: 0 kB' 'Active: 9718912 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330008 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487000 kB' 'Mapped: 226644 kB' 'Shmem: 8846168 kB' 'KReclaimable: 199548 kB' 'Slab: 575800 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376252 kB' 'KernelStack: 12752 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10444344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.605 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:47.606 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.606 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.606 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:47.606 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
setup/common.sh@31 -- # read -r var val _ 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42362616 kB' 'MemAvailable: 45851316 kB' 'Buffers: 2704 kB' 'Cached: 12726228 kB' 'SwapCached: 0 kB' 'Active: 9719332 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330428 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487448 kB' 'Mapped: 226672 kB' 'Shmem: 8846168 kB' 'KReclaimable: 199548 kB' 'Slab: 575784 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376236 kB' 'KernelStack: 12784 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10444364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB' 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
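The printf record above is the whole /proc/meminfo snapshot being fed, one entry per line, into the key-matching loop, and each entry is split by bash word splitting under IFS=': '. A one-line illustration, using a value taken from that snapshot:

    IFS=': ' read -r var val _ <<<'MemTotal: 60541728 kB'
    echo "$var=$val"   # prints MemTotal=60541728; the trailing "kB" lands in the discarded third field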
00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.607 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue xtrace records for each remaining /proc/meminfo key elided; every non-matching key falls through to continue until the requested field is reached ...]
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
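Every get_meminfo call in this trace follows the same shape: snapshot the meminfo source, strip any per-node "Node N " prefix, then scan entry by entry until the requested key matches, which is exactly what the long compare/continue runs record. Below is a minimal sketch of the helper as reconstructed from the setup/common.sh@17-@33 records; it is an approximation for reading the trace, not the verbatim SPDK source:

    shopt -s extglob   # required by the +([0-9]) pattern below

    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # A node argument switches to that NUMA node's meminfo -- the path
        # probed at common.sh@23 (here $node is empty, so /proc/meminfo wins).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the compare/continue runs in this log
            echo "$val"                        # e.g. the "echo 0" for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this host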
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.608 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42362616 kB' 'MemAvailable: 45851316 kB' 'Buffers: 2704 kB' 'Cached: 12726248 kB' 'SwapCached: 0 kB' 'Active: 9719044 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330140 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487180 kB' 'Mapped: 226596 kB' 'Shmem: 8846188 kB' 'KReclaimable: 199548 kB' 'Slab: 575788 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376240 kB' 'KernelStack: 12784 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10444384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:47.609 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.609 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue xtrace records for each remaining /proc/meminfo key elided; every non-matching key falls through to continue until the requested field is reached ...]
00:04:47.610 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.610 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.610 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
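With anon, surp, and resv extracted, the records just below (setup/hugepages.sh@100-@110) assert that the custom allocation is fully accounted for. A standalone sketch of that arithmetic, assuming the variable names shown in the trace and the get_meminfo reconstruction above; how the final check is chained is an assumption:

    anon=$(get_meminfo AnonHugePages)    # 0 on this host
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0
    nr_hugepages=1536                    # the allocation this test requested

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # All 1536 requested pages must be present, with none surplus or reserved:
    (( 1536 == nr_hugepages + surp + resv ))
    # ...after which the kernel's own HugePages_Total is re-read:
    (( 1536 == nr_hugepages )) && get_meminfo HugePages_Total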
00:04:47.610 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:47.610 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:47.610 nr_hugepages=1536
00:04:47.610 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:47.610 resv_hugepages=0
00:04:47.610 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:47.610 surplus_hugepages=0
00:04:47.610 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:47.610 anon_hugepages=0
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 42362616 kB' 'MemAvailable: 45851316 kB' 'Buffers: 2704 kB' 'Cached: 12726248 kB' 'SwapCached: 0 kB' 'Active: 9719188 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330284 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487364 kB' 'Mapped: 226596 kB' 'Shmem: 8846188 kB' 'KReclaimable: 199548 kB' 'Slab: 575788 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376240 kB' 'KernelStack: 12800 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 10444404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.611 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue xtrace records for each remaining /proc/meminfo key elided; every non-matching key falls through to continue until the requested field is reached ...]
00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.612 00:50:40
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.612 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
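What the condensed scan above is doing: setup/common.sh's get_meminfo slurps the chosen meminfo file, strips the per-node "Node N " prefix when present, and walks the "key: value" records with IFS=': ' until the requested field matches, echoing its value. A standalone sketch of the same pattern, runnable as-is; the function name, the return-1 error path, and the example calls are mine, not from the harness:

    #!/usr/bin/env bash
    shopt -s extglob
    # Re-creation of the lookup pattern condensed above: load the meminfo
    # file, strip any "Node N " prefix, then scan "<key>: <value>" records
    # until the requested key matches.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local mem var val _ line
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1   # key not found (assumption; the trace never hits this path)
    }

    get_meminfo_sketch HugePages_Total     # whole system: 1536 in the run above
    get_meminfo_sketch HugePages_Surp 0   # NUMA node 0 only: 0 in the run above

The linear scan is why the raw xtrace is so noisy: every non-matching field produces one comparison and one continue, which is exactly the repetition condensed here.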
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.613 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20323340 kB' 'MemUsed: 12553600 kB' 'SwapCached: 0 kB' 'Active: 7020532 kB' 'Inactive: 3248472 kB' 'Active(anon): 6809796 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9916740 kB' 'Mapped: 191152 kB' 'AnonPages: 355464 kB' 'Shmem: 6457532 kB' 'KernelStack: 8216 kB' 'PageTables: 5792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127464 kB' 'Slab: 383116 kB' 'SReclaimable: 127464 kB' 'SUnreclaim: 255652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: repeated '-- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / '-- # continue' iterations over the non-matching node0 fields (MemTotal through HugePages_Free)]
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
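The per-node bookkeeping above (nodes_test[node] += resv, then get_meminfo HugePages_Surp per node) pairs with the get_nodes walk traced earlier, which derives each node index from its /sys directory name. A sketch of the same enumeration that also reads each node's current 2048 kB hugepage count; the nodes_now array name and the final report are mine:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    # Walk the NUMA node directories the way setup/hugepages.sh@29-30 does,
    # deriving the index from the directory name, and record the per-node
    # 2 MB hugepage count from standard sysfs.
    declare -A nodes_now
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}   # "/sys/.../node1" -> "1"
        nodes_now[$n]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    for n in "${!nodes_now[@]}"; do
        echo "node$n: ${nodes_now[$n]} hugepages"   # e.g. node0: 512, node1: 1024
    done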
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.614 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 22038772 kB' 'MemUsed: 5626016 kB' 'SwapCached: 0 kB' 'Active: 2698220 kB' 'Inactive: 245388 kB' 'Active(anon): 2520052 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2812212 kB' 'Mapped: 35444 kB' 'AnonPages: 131464 kB' 'Shmem: 2388656 kB' 'KernelStack: 4584 kB' 'PageTables: 2168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72084 kB' 'Slab: 192672 kB' 'SReclaimable: 72084 kB' 'SUnreclaim: 120588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: repeated '-- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / '-- # continue' iterations over the non-matching node1 fields (MemTotal through HugePages_Free)]
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:47.875 node0=512 expecting 512
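The sorted_t[nodes_test[node]]=1 / sorted_s[nodes_sys[node]]=1 assignments in this loop lean on a bash idiom worth naming: indexing a plain array by the observed value itself. Duplicate values collapse onto one slot, and "${!arr[@]}" expands the indices in ascending numeric order, so the harness gets deduplication and sorting for free. A tiny illustration with made-up values:

    #!/usr/bin/env bash
    # Use array indices as a set: repeats are absorbed, and index expansion
    # comes back in ascending order - presumably why the harness calls these
    # arrays sorted_t and sorted_s.
    declare -a sorted_t
    for v in 1024 512 1024; do   # example values, not taken from the trace
        sorted_t[v]=1
    done
    echo "unique, ascending: ${!sorted_t[@]}"   # -> 512 1024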
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:47.875 node1=1024 expecting 1024
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:47.875
00:04:47.875 real 0m1.429s
00:04:47.875 user 0m0.631s
00:04:47.875 sys 0m0.759s
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:47.875 00:50:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:47.875 ************************************
00:04:47.875 END TEST custom_alloc
00:04:47.875 ************************************
00:04:47.875 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:47.875 00:50:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:47.875 00:50:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:47.875 00:50:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:47.875 ************************************
00:04:47.875 START TEST no_shrink_alloc
00:04:47.875 ************************************
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
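The get_test_nr_hugepages prologue just traced divides the requested size by the default hugepage size (2097152 over 2048 giving nr_hugepages=1024 here) and assigns the full count to every node the caller listed, node 0 in this run. A sketch of that arithmetic, under the assumption that both numbers share a unit, which the trace itself does not spell out; names mirror the harness but the body is a reconstruction:

    #!/usr/bin/env bash
    # Reconstruct the size -> page-count -> per-node assignment seen above.
    get_test_nr_hugepages_sketch() {
        local size=$1; shift
        local node_ids=("$@") default_hugepages=2048   # 2048 matches the trace
        local nr_hugepages=$((size / default_hugepages))
        declare -gA nodes_test=()
        local n
        for n in "${node_ids[@]}"; do
            nodes_test[$n]=$nr_hugepages   # each listed node gets the full count
        done
        echo "nr_hugepages=$nr_hugepages on node(s): ${!nodes_test[*]}"
    }
    get_test_nr_hugepages_sketch 2097152 0   # -> nr_hugepages=1024 on node(s): 0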
00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.875 00:50:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.809 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:48.809 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:48.809 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:48.809 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:48.809 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:48.809 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:48.809 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:48.809 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:48.809 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:48.809 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:48.809 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:48.809 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:48.809 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:48.809 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:48.809 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:48.809 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:48.809 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.074 00:50:42 
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.074 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43311688 kB' 'MemAvailable: 46800388 kB' 'Buffers: 2704 kB' 'Cached: 12726360 kB' 'SwapCached: 0 kB' 'Active: 9719476 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330572 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487504 kB' 'Mapped: 226668 kB' 'Shmem: 8846300 kB' 'KReclaimable: 199548 kB' 'Slab: 575532 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 375984 kB' 'KernelStack: 12768 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10444768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: setup/common.sh@32 tests each key of the snapshot above, MemTotal through HardwareCorrupted, against AnonHugePages and hits 'continue' on every non-match]
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
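The scan just traced is the core of get_meminfo: split each /proc/meminfo line on ':' plus whitespace and return the value of the requested key. A minimal stand-alone sketch of the same idea follows; the function name and fallback behaviour are illustrative, not SPDK's exact implementation:

    #!/usr/bin/env bash
    # Sketch of the lookup traced above: print the value of one meminfo key.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # With a node argument, read the per-node copy under sysfs instead.
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }        # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        return 1                               # key not present
    }
    get_meminfo_sketch AnonHugePages           # prints 0 on the machine above

Note how IFS=': ' makes read split "AnonHugePages: 0 kB" into var=AnonHugePages, val=0, with the unit swallowed by the throwaway field, which is exactly why the trace ends in "echo 0".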
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.075 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.076 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.076 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.076 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.076 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.076 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.076 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43318876 kB' 'MemAvailable: 46807576 kB' 'Buffers: 2704 kB' 'Cached: 12726364 kB' 'SwapCached: 0 kB' 'Active: 9719376 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330472 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487384 kB' 'Mapped: 226604 kB' 'Shmem: 8846304 kB' 'KReclaimable: 199548 kB' 'Slab: 575516 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 375968 kB' 'KernelStack: 12784 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10444784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: setup/common.sh@32 tests each key of the snapshot above, MemTotal through HugePages_Rsvd, against HugePages_Surp and hits 'continue' on every non-match]
00:04:49.077 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.077 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.077 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.077 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
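Each probe repeats the same full-file scan; when reading these counters by hand, a single pass does the job. For instance, plain grep/awk against the same /proc/meminfo shown in the snapshots (equivalent reads, not SPDK code):

    # One-shot equivalents of the values the trace extracts one key at a time.
    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1, $2 }' /proc/meminfo
    grep '^AnonHugePages:' /proc/meminfo   # transparent hugepage usage; 0 kB in this run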
-- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.078 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.079 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.079 00:50:42 
[trace condensed: setup/common.sh@31-32 repeats IFS=': '; read -r var val _; compare; continue for each remaining /proc/meminfo key (SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) until the requested key is reached]
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
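For readers following the trace: everything above is setup/common.sh's get_meminfo walking a meminfo file one 'key: value' line at a time until the requested key matches. A minimal, self-contained sketch of the same technique (a simplified stand-in for illustration, not the verbatim SPDK helper) would be:

    get_meminfo() {                                # usage: get_meminfo KEY [NODE]
        local key=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # per-node meminfo lives in sysfs; those lines carry a "Node N " prefix
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node $node }               # harmless no-op for /proc/meminfo
            IFS=': ' read -r var val _ <<< "$line" # the split the trace shows at @31
            if [[ $var == "$key" ]]; then          # the compare repeated at @32
                echo "${val:-0}"                   # e.g. HugePages_Rsvd -> 0 above
                return 0
            fi
        done < "$mem_f"
        return 1
    }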
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.080 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43319256 kB' 'MemAvailable: 46807956 kB' 'Buffers: 2704 kB' 'Cached: 12726400 kB' 'SwapCached: 0 kB' 'Active: 9719176 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330272 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487172 kB' 'Mapped: 226604 kB' 'Shmem: 8846340 kB' 'KReclaimable: 199548 kB' 'Slab: 575616 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376068 kB' 'KernelStack: 12784 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10444832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
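A quick sanity check on the snapshot above: its hugepage fields are self-consistent, since HugePages_Total times Hugepagesize equals the Hugetlb total (values taken from this log):

    pages=1024      # HugePages_Total from the snapshot
    pagesz_kb=2048  # Hugepagesize from the snapshot
    echo $(( pages * pagesz_kb ))  # prints 2097152, matching 'Hugetlb: 2097152 kB'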
[trace condensed: the same setup/common.sh@31-32 read/compare/continue loop walks every /proc/meminfo key of the snapshot above (MemTotal through HugePages_Free) until HugePages_Total matches]
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
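The get_nodes trace above enumerates NUMA nodes with an extglob glob and keys an array by node index. A sketch of that enumeration (with the assumption, plausible but not shown in this log, that the per-node counts come from the sysfs nr_hugepages files):

    shopt -s extglob nullglob
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # "/sys/.../node0" -> key "0"; 2 MB pool size per node
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this box: node0=1024, node1=0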
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.082 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19271360 kB' 'MemUsed: 13605580 kB' 'SwapCached: 0 kB' 'Active: 7021012 kB' 'Inactive: 3248472 kB' 'Active(anon): 6810276 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9916844 kB' 'Mapped: 191160 kB' 'AnonPages: 355804 kB' 'Shmem: 6457636 kB' 'KernelStack: 8200 kB' 'PageTables: 5696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127464 kB' 'Slab: 382928 kB' 'SReclaimable: 127464 kB' 'SUnreclaim: 255464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
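With the hypothetical get_meminfo sketch from earlier, the node-scoped query the trace is setting up here reduces to:

    surp=$(get_meminfo HugePages_Surp 0)   # reads /sys/devices/system/node/node0/meminfo -> 0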
[trace condensed: the setup/common.sh@31-32 read/compare/continue loop walks each node0 meminfo key of the snapshot above (MemTotal through HugePages_Free) until HugePages_Surp matches]
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:49.084 00:50:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:50.463 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:50.463 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:50.463 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:50.463 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:50.463 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:50.463 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:50.463 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:50.463 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:50.463 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:50.463 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:50.463 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:50.463 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:50.463 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:50.463 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:50.463 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:50.463 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:50.463 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:50.463 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
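The 'node0=1024 expecting 1024' line together with the INFO message is the whole point of the no_shrink_alloc case: re-running setup.sh with NRHUGE=512 and CLEAR_HUGE=no must not shrink the existing 1024-page pool. The accounting the trace verifies, condensed into a sketch (reusing the earlier hypothetical helper; the 1024 expectation is this run's value):

    nr=$(get_meminfo HugePages_Total)    # 1024 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0
    (( nr == 1024 + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }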
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.463 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.464 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43334296 kB' 'MemAvailable: 46822996 kB' 'Buffers: 2704 kB' 'Cached: 12726468 kB' 'SwapCached: 0 kB' 'Active: 9719792 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330888 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487612 kB' 'Mapped: 226656 kB' 'Shmem: 8846408 kB' 'KReclaimable: 199548 kB' 'Slab: 575760 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376212 kB' 'KernelStack: 12800 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10445208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
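The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 is the trace checking the standard kernel THP knob, where the bracketed word is the active mode. A sketch of what that guard does (reusing the hypothetical helper from earlier):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *'[never]'* ]]; then
        # THP is not disabled, so AnonHugePages is worth sampling --
        # 0 kB in the snapshot above
        anon=$(get_meminfo AnonHugePages)
    fi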
[trace condensed: the setup/common.sh@31-32 read/compare/continue loop repeats for each /proc/meminfo key of the snapshot above (MemTotal through SUnreclaim and beyond) while scanning for AnonHugePages; the captured log is cut off mid-iteration at setup/common.sh@31 -- # read]
-r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.465 00:50:43 
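The trace above is one call of setup/common.sh's get_meminfo helper with the key AnonHugePages: the meminfo file is slurped into the mem array (common.sh@28), any "Node N " prefix that per-node meminfo files carry is stripped (common.sh@29), and the array is then re-read line by line with IFS=': ' (common.sh@31), each non-matching key falling through to continue (common.sh@32) until the requested key matches and its value is echoed (common.sh@33). A minimal sketch of that helper, reconstructed from the traced commands; the argument order, the extglob flag, and the return-on-miss are assumptions, since only the executed body lines are visible in this log:

get_meminfo() {                       # reconstruction; signature assumed
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # A per-node query would read that node's own meminfo file instead;
    # with node empty (as traced) the test sees .../node/node/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it
    # (+([0-9]) is an extglob pattern, so this needs: shopt -s extglob)
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Against this sketch, the lookups traced in this section are anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp), resv=$(get_meminfo HugePages_Rsvd), and get_meminfo HugePages_Total, all with node unset, so the system-wide /proc/meminfo is read each time.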
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43334640 kB' 'MemAvailable: 46823340 kB' 'Buffers: 2704 kB' 'Cached: 12726468 kB' 'SwapCached: 0 kB' 'Active: 9719428 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330524 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487260 kB' 'Mapped: 226760 kB' 'Shmem: 8846408 kB' 'KReclaimable: 199548 kB' 'Slab: 575760 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376212 kB' 'KernelStack: 12752 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10445224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.465 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
...
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
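Each get_meminfo call rescans the identical snapshot from MemTotal down, so fetching the four hugepage counters costs four full passes over the file. Where a one-shot report of all counters is wanted, a single pass does the same job; this is an alternative sketch, not what the traced setup/common.sh does:

# Hypothetical one-pass variant: one read of /proc/meminfo, all four counters.
read -r total free rsvd surp < <(awk -F': *' '
    $1 == "HugePages_Total" { t = $2 }
    $1 == "HugePages_Free"  { f = $2 }
    $1 == "HugePages_Rsvd"  { r = $2 }
    $1 == "HugePages_Surp"  { s = $2 }
    END { print t, f, r, s }
' /proc/meminfo)
echo "total=$total free=$free rsvd=$rsvd surp=$surp"

On the snapshot printed above this would report total=1024 free=1024 rsvd=0 surp=0, matching the values the per-key scans recover one at a time.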
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43334060 kB' 'MemAvailable: 46822760 kB' 'Buffers: 2704 kB' 'Cached: 12726492 kB' 'SwapCached: 0 kB' 'Active: 9719668 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330764 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487484 kB' 'Mapped: 226616 kB' 'Shmem: 8846432 kB' 'KReclaimable: 199548 kB' 'Slab: 575768 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376220 kB' 'KernelStack: 12816 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10445248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB'
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.467 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
...
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:50.469 nr_hugepages=1024
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:50.469 resv_hugepages=0
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:50.469 surplus_hugepages=0
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:50.469 anon_hugepages=0
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.469 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
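hugepages.sh@102-@105 report the four recovered values, and @107/@109 carry the actual no_shrink_alloc assertions: the pool observed in meminfo must account for exactly the 1024 pages requested, with zero surplus and zero reserved pages, i.e. the allocation did not shrink. The same arithmetic, restated standalone with the traced values:

nr_hugepages=1024   # HugePages_Total, per the snapshot above
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
(( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107: 1024 == 1024 + 0 + 0
(( 1024 == nr_hugepages ))                 # hugepages.sh@109: pool size unchanged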
'Cached: 12726512 kB' 'SwapCached: 0 kB' 'Active: 9719688 kB' 'Inactive: 3493860 kB' 'Active(anon): 9330784 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487484 kB' 'Mapped: 226616 kB' 'Shmem: 8846452 kB' 'KReclaimable: 199548 kB' 'Slab: 575768 kB' 'SReclaimable: 199548 kB' 'SUnreclaim: 376220 kB' 'KernelStack: 12816 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 10445268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2158172 kB' 'DirectMap2M: 16635904 kB' 'DirectMap1G: 50331648 kB' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.470 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
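The xtrace around this point is common.sh's get_meminfo walking every key of the meminfo dump printed above until it reaches the one requested (here HugePages_Total); the scan resumes below. Reduced to its core, the pattern is the following simplified sketch, not the exact SPDK helper (which also handles per-node files):

get_meminfo_sketch() {
    # Print the value of one meminfo key, e.g. HugePages_Total -> 1024.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

The values read this way feed the accounting check visible earlier in the trace: (( 1024 == nr_hugepages + surp + resv )).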
00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.471 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19260652 kB' 'MemUsed: 13616288 kB' 'SwapCached: 0 kB' 'Active: 7020980 kB' 'Inactive: 3248472 kB' 'Active(anon): 6810244 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9916852 
kB' 'Mapped: 191608 kB' 'AnonPages: 356112 kB' 'Shmem: 6457644 kB' 'KernelStack: 8232 kB' 'PageTables: 5732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127464 kB' 'Slab: 383036 kB' 'SReclaimable: 127464 kB' 'SUnreclaim: 255572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
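From setup/common.sh@18 onward the same scan runs against node 0: lines in /sys/devices/system/node/node0/meminfo carry a "Node 0 " prefix, so the helper retargets mem_f and strips that prefix before matching keys. Condensed from the steps visible in the trace (extglob is assumed enabled, as it is in the SPDK scripts):

node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
shopt -s extglob                      # needed for the +([0-9]) pattern below
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")      # drop the per-node "Node 0 " prefix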
00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.472 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.473 00:50:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.473 node0=1024 expecting 1024 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.473 00:04:50.473 real 0m2.774s 00:04:50.473 user 0m1.089s 00:04:50.473 sys 0m1.600s 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.473 00:50:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:50.473 ************************************ 00:04:50.473 END TEST no_shrink_alloc 00:04:50.473 ************************************ 00:04:50.473 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:50.473 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:50.473 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.473 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.473 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.473 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.473 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.731 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.731 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.731 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.731 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.731 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.731 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:50.731 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:50.731 00:04:50.731 real 0m11.315s 00:04:50.731 user 0m4.308s 00:04:50.731 sys 0m5.844s 00:04:50.731 00:50:43 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.731 00:50:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:50.731 ************************************ 00:04:50.731 END TEST hugepages 00:04:50.731 ************************************ 00:04:50.731 00:50:43 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:50.731 00:50:43 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.731 00:50:43 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.731 00:50:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.731 ************************************ 00:04:50.731 START TEST driver 00:04:50.731 ************************************ 00:04:50.731 00:50:43 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:50.731 * Looking for test storage... 
00:04:50.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.731 00:50:43 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:50.731 00:50:43 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.731 00:50:43 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.260 00:50:46 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:53.260 00:50:46 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.260 00:50:46 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.260 00:50:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.260 ************************************ 00:04:53.260 START TEST guess_driver 00:04:53.260 ************************************ 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:53.260 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:53.260 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:53.260 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:53.260 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:53.260 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:53.260 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:53.260 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:53.260 00:50:46 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:53.260 Looking for driver=vfio-pci 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.260 00:50:46 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.232 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.232 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.232 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.232 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.232 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.232 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:54.490 00:50:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.424 00:50:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.424 00:50:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.424 00:50:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.424 00:50:48 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:55.424 00:50:48 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:55.424 00:50:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.424 00:50:48 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.950 00:04:57.950 real 0m4.779s 00:04:57.950 user 0m1.131s 00:04:57.950 sys 0m1.825s 00:04:57.950 00:50:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.950 00:50:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:57.950 ************************************ 00:04:57.950 END TEST guess_driver 00:04:57.950 ************************************ 00:04:57.950 00:04:57.950 real 0m7.307s 00:04:57.950 user 0m1.691s 00:04:57.950 sys 0m2.811s 00:04:57.950 00:50:50 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.950 
00:50:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:57.950 ************************************ 00:04:57.950 END TEST driver 00:04:57.950 ************************************ 00:04:57.950 00:50:50 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:57.950 00:50:50 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.950 00:50:50 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.950 00:50:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:57.950 ************************************ 00:04:57.950 START TEST devices 00:04:57.950 ************************************ 00:04:57.950 00:50:51 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:57.950 * Looking for test storage... 00:04:57.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:57.950 00:50:51 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:57.950 00:50:51 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:57.950 00:50:51 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.950 00:50:51 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:59.850 00:50:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:59.850 00:50:52 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:59.850 No valid GPT data, 
bailing 00:04:59.850 00:50:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:59.850 00:50:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:59.850 00:50:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:59.850 00:50:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:59.850 00:50:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:59.850 00:50:52 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:59.850 00:50:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.850 00:50:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:59.850 ************************************ 00:04:59.850 START TEST nvme_mount 00:04:59.850 ************************************ 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:59.850 00:50:52 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:59.850 00:50:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:00.783 Creating new GPT entries in memory. 00:05:00.783 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:00.783 other utilities. 00:05:00.783 00:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:00.783 00:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.783 00:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.783 00:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.783 00:50:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:01.716 Creating new GPT entries in memory. 00:05:01.716 The operation has completed successfully. 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3626841 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
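The partitioning traced above is plain sector arithmetic: size /= 512 turns the 1 GiB byte count into 512-byte sectors (2097152), the first partition starts at sector 2048, and each end is start + size - 1, which is exactly where the --new=1:2048:2099199 argument comes from. A condensed sketch, assuming the single test partition on this rig's /dev/nvme0n1:

  # Condensed partition_drive logic, assuming one 1 GiB test partition.
  disk=/dev/nvme0n1
  part_no=1
  size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors = 2097152
  part_start=0
  part_end=0

  sgdisk "$disk" --zap-all       # drop any existing GPT/MBR first

  for (( part = 1; part <= part_no; part++ )); do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))   # 2048 + 2097152 - 1 = 2099199
      # flock keeps concurrent sgdisk callers off the same disk
      flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
  done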
00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.716 00:50:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.650 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.908 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:02.909 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.909 00:50:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.167 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:03.167 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:03.167 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:03.167 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:03.167 00:50:56 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.167 00:50:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.576 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.577 00:50:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.952 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:05.952 00:05:05.952 real 0m6.275s 00:05:05.952 user 0m1.443s 00:05:05.952 sys 0m2.360s 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.952 00:50:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:05.952 ************************************ 00:05:05.952 END TEST nvme_mount 00:05:05.952 ************************************ 
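Both nvme_mount verify passes above work the same way: run setup.sh config with PCI_ALLOWED pinned to the test controller and scan the per-device output for an "Active devices:" line naming the expected mount, which is why only the 0000:88:00.0 row can set found=1. A rough sketch of that scan, with paths shortened and the mount string taken from the first pass:

  # Rough re-creation of the verify scan; run from the spdk checkout.
  dev=0000:88:00.0
  mounts=nvme0n1:nvme0n1p1
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$dev" ]] || continue
      # setup.sh prints e.g. "Active devices: mount@nvme0n1:nvme0n1p1,
      # so not binding PCI dev" for devices it refuses to touch
      [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
  done < <(PCI_ALLOWED="$dev" ./scripts/setup.sh config)
  (( found == 1 )) || echo "verify failed for $dev" >&2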
00:05:05.952 00:50:58 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:05.952 00:50:58 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.952 00:50:58 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.952 00:50:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:05.952 ************************************ 00:05:05.952 START TEST dm_mount 00:05:05.952 ************************************ 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:05.952 00:50:58 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:06.887 Creating new GPT entries in memory. 00:05:06.887 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:06.887 other utilities. 00:05:06.887 00:50:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:06.887 00:50:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.887 00:50:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.887 00:50:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.887 00:50:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:07.821 Creating new GPT entries in memory. 00:05:07.821 The operation has completed successfully. 
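The dm_mount steps that follow stitch the two fresh 1 GiB partitions into a single device-mapper node and then check that each partition lists that node under holders/. The log only shows "dmsetup create nvme_dm_test", so the linear table below is an assumption about what gets fed in, but it shows the shape of the step:

  # Assumed shape of the dm_mount device creation; the actual table the
  # test feeds to dmsetup is not shown in this log.
  dm_name=nvme_dm_test
  p1=/dev/nvme0n1p1
  p2=/dev/nvme0n1p2
  sz1=$(blockdev --getsz "$p1")   # partition sizes in 512-byte sectors
  sz2=$(blockdev --getsz "$p2")

  # dmsetup reads the target table from stdin when no file is given
  printf '%s\n' "0 $sz1 linear $p1 0" "$sz1 $sz2 linear $p2 0" |
      dmsetup create "$dm_name"

  dm=$(readlink -f "/dev/mapper/$dm_name")   # resolves to e.g. /dev/dm-0
  dm=${dm##*/}                               # -> dm-0
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] || echo "p1 missing holder" >&2
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] || echo "p2 missing holder" >&2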
00:05:07.821 00:51:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:07.821 00:51:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.821 00:51:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:07.821 00:51:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.821 00:51:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:09.196 The operation has completed successfully. 00:05:09.196 00:51:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:09.196 00:51:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.196 00:51:01 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3629230 00:05:09.196 00:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:09.196 00:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.196 00:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.196 00:51:01 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.196 00:51:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:10.130 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:10.130 00:51:03 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:10.131 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.131 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:10.131 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.131 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.131 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:10.131 00:51:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.131 00:51:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.131 00:51:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.064 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:11.322 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:11.322 00:05:11.322 real 0m5.507s 00:05:11.322 user 0m0.894s 00:05:11.322 sys 0m1.468s 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.322 00:51:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:11.322 ************************************ 00:05:11.322 END TEST dm_mount 00:05:11.322 ************************************ 00:05:11.322 00:51:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:11.322 00:51:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:11.322 00:51:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.322 00:51:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.322 00:51:04 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:11.322 00:51:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.322 00:51:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:11.579 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:11.579 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:11.579 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:11.579 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:11.579 00:51:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:11.579 00:51:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.579 00:51:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:11.579 00:51:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.579 00:51:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:11.579 00:51:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.579 00:51:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:11.579 00:05:11.579 real 0m13.708s 00:05:11.579 user 0m2.971s 00:05:11.579 sys 0m4.877s 00:05:11.579 00:51:04 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.579 00:51:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:11.579 ************************************ 00:05:11.579 END TEST devices 00:05:11.579 ************************************ 00:05:11.836 00:05:11.836 real 0m43.171s 00:05:11.836 user 0m12.355s 00:05:11.836 sys 0m19.047s 00:05:11.836 00:51:04 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.836 00:51:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:11.836 ************************************ 00:05:11.836 END TEST setup.sh 00:05:11.836 ************************************ 00:05:11.836 00:51:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:13.209 Hugepages 00:05:13.209 node hugesize free / total 00:05:13.209 node0 1048576kB 0 / 0 00:05:13.209 node0 2048kB 2048 / 2048 00:05:13.209 node1 1048576kB 0 / 0 00:05:13.209 node1 2048kB 0 / 0 00:05:13.209 00:05:13.209 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:13.209 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:13.209 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:13.209 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:13.209 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:13.209 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:13.209 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:13.209 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:13.209 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:13.209 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:13.209 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:13.209 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:13.209 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:13.209 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:13.209 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:13.209 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:13.209 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:13.209 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:13.209 00:51:06 -- spdk/autotest.sh@130 -- # uname -s 00:05:13.209 00:51:06 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:13.209 00:51:06 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:13.209 00:51:06 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:14.142 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:14.142 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:14.142 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:14.142 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:14.142 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:14.142 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:14.142 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:14.142 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:14.142 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:14.142 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:14.400 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:14.400 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:14.400 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:14.400 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:14.400 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:14.400 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:15.378 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:15.378 00:51:08 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:16.312 00:51:09 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:16.312 00:51:09 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:16.312 00:51:09 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:16.312 00:51:09 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:16.312 00:51:09 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:16.312 00:51:09 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:16.312 00:51:09 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.312 00:51:09 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:16.312 00:51:09 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:16.312 00:51:09 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:16.312 00:51:09 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:16.312 00:51:09 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.686 Waiting for block devices as requested 00:05:17.686 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:17.686 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:17.686 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:17.944 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:17.944 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:17.944 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:17.944 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:18.202 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:18.202 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:18.202 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:18.202 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:18.461 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:18.461 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:18.461 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:18.719 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:18.719 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:18.719 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:18.977 00:51:11 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
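The bdf discovery above (and again in the opal cleanup further down) funnels through one helper: gen_nvme.sh emits a bdev JSON config and jq pulls out each controller's PCI address. A minimal standalone version of get_nvme_bdfs, using the same pipeline the trace shows:

  # Minimal get_nvme_bdfs, as exercised above.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # this rig prints a single 0000:88:00.0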
00:05:18.977 00:51:11 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:18.977 00:51:11 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:18.977 00:51:11 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:18.977 00:51:11 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:18.977 00:51:11 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:18.977 00:51:11 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:18.977 00:51:11 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:18.977 00:51:11 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:18.977 00:51:11 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:18.977 00:51:11 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:18.977 00:51:11 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:18.977 00:51:11 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:18.977 00:51:11 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:18.977 00:51:11 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:18.977 00:51:11 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:18.977 00:51:11 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:18.977 00:51:11 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:18.977 00:51:11 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:18.977 00:51:11 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:18.977 00:51:11 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:18.977 00:51:11 -- common/autotest_common.sh@1553 -- # continue 00:05:18.977 00:51:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:18.977 00:51:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.977 00:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:18.977 00:51:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:18.977 00:51:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:18.977 00:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:18.977 00:51:11 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:20.351 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:21.284 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:21.284 00:51:14 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:21.284 00:51:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.284 00:51:14 -- 
common/autotest_common.sh@10 -- # set +x 00:05:21.284 00:51:14 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:21.284 00:51:14 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:21.284 00:51:14 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.284 00:51:14 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:21.284 00:51:14 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:21.284 00:51:14 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:21.284 00:51:14 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:21.284 00:51:14 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:21.284 00:51:14 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.284 00:51:14 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:21.284 00:51:14 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:21.284 00:51:14 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:21.284 00:51:14 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:21.284 00:51:14 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:21.284 00:51:14 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:21.284 00:51:14 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:21.284 00:51:14 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:21.284 00:51:14 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:21.284 00:51:14 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:21.284 00:51:14 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:21.284 00:51:14 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3635023 00:05:21.284 00:51:14 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.284 00:51:14 -- common/autotest_common.sh@1594 -- # waitforlisten 3635023 00:05:21.284 00:51:14 -- common/autotest_common.sh@827 -- # '[' -z 3635023 ']' 00:05:21.284 00:51:14 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.284 00:51:14 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.284 00:51:14 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.284 00:51:14 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.284 00:51:14 -- common/autotest_common.sh@10 -- # set +x 00:05:21.543 [2024-07-25 00:51:14.452153] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
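[editorial note] The pre-cleanup pass traced above reduces to a handful of shell probes: enumerate NVMe BDFs from gen_nvme.sh, match the PCI device id, resolve the controller node through sysfs, then read OACS and unvmcap from nvme id-ctrl. A minimal standalone sketch of that flow, assuming nvme-cli and jq are installed and $rootdir points at an SPDK checkout (the default path below is hypothetical):

    #!/usr/bin/env bash
    rootdir=${rootdir:-/path/to/spdk}   # hypothetical default; adjust to your tree

    # Enumerate NVMe BDFs the way get_nvme_bdfs does: gen_nvme.sh emits a JSON
    # config whose traddr fields are the PCI addresses.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        # Keep only devices whose PCI device id matches the target (0x0a54 above).
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] || continue

        # Resolve the controller node behind the BDF via sysfs, as
        # get_nvme_ctrlr_from_bdf does with readlink + grep + basename.
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme" | head -n1)
        ctrlr=/dev/$(basename "$path")

        # OACS bit 3 (0x8) advertises namespace management; masking it out is
        # consistent with the oacs=0xf -> oacs_ns_manage=8 step in the trace.
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        oacs_ns_manage=$((oacs & 0x8))

        # unvmcap of 0 means no unallocated NVM capacity, so cleanup is skipped
        # (the "continue" at autotest_common.sh@1553 above).
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        echo "$bdf: ctrlr=$ctrlr ns_manage=$oacs_ns_manage unvmcap=$unvmcap"
    done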
00:05:21.543 [2024-07-25 00:51:14.452249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635023 ] 00:05:21.543 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.543 [2024-07-25 00:51:14.512616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.543 [2024-07-25 00:51:14.602294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.802 00:51:14 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.802 00:51:14 -- common/autotest_common.sh@860 -- # return 0 00:05:21.802 00:51:14 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:21.802 00:51:14 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:21.802 00:51:14 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:25.143 nvme0n1 00:05:25.143 00:51:17 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:25.144 [2024-07-25 00:51:18.165759] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:25.144 [2024-07-25 00:51:18.165806] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:25.144 request: 00:05:25.144 { 00:05:25.144 "nvme_ctrlr_name": "nvme0", 00:05:25.144 "password": "test", 00:05:25.144 "method": "bdev_nvme_opal_revert", 00:05:25.144 "req_id": 1 00:05:25.144 } 00:05:25.144 Got JSON-RPC error response 00:05:25.144 response: 00:05:25.144 { 00:05:25.144 "code": -32603, 00:05:25.144 "message": "Internal error" 00:05:25.144 } 00:05:25.144 00:51:18 -- common/autotest_common.sh@1600 -- # true 00:05:25.144 00:51:18 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:25.144 00:51:18 -- common/autotest_common.sh@1604 -- # killprocess 3635023 00:05:25.144 00:51:18 -- common/autotest_common.sh@946 -- # '[' -z 3635023 ']' 00:05:25.144 00:51:18 -- common/autotest_common.sh@950 -- # kill -0 3635023 00:05:25.144 00:51:18 -- common/autotest_common.sh@951 -- # uname 00:05:25.144 00:51:18 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:25.144 00:51:18 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3635023 00:05:25.144 00:51:18 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:25.144 00:51:18 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:25.144 00:51:18 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3635023' 00:05:25.144 killing process with pid 3635023 00:05:25.144 00:51:18 -- common/autotest_common.sh@965 -- # kill 3635023 00:05:25.144 00:51:18 -- common/autotest_common.sh@970 -- # wait 3635023 00:05:27.042 00:51:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:27.042 00:51:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:27.042 00:51:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:27.042 00:51:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:27.042 00:51:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:27.042 00:51:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.042 00:51:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.042 00:51:19 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:27.042 00:51:19 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:27.042 00:51:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.042 00:51:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.042 00:51:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.042 ************************************ 00:05:27.042 START TEST env 00:05:27.042 ************************************ 00:05:27.042 00:51:20 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:27.042 * Looking for test storage... 00:05:27.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:27.042 00:51:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:27.042 00:51:20 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.042 00:51:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.042 00:51:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.042 ************************************ 00:05:27.042 START TEST env_memory 00:05:27.042 ************************************ 00:05:27.042 00:51:20 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:27.042 00:05:27.042 00:05:27.042 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.042 http://cunit.sourceforge.net/ 00:05:27.042 00:05:27.042 00:05:27.042 Suite: memory 00:05:27.042 Test: alloc and free memory map ...[2024-07-25 00:51:20.121248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.042 passed 00:05:27.042 Test: mem map translation ...[2024-07-25 00:51:20.140972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.042 [2024-07-25 00:51:20.140995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.042 [2024-07-25 00:51:20.141046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.042 [2024-07-25 00:51:20.141058] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.042 passed 00:05:27.042 Test: mem map registration ...[2024-07-25 00:51:20.181535] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:27.042 [2024-07-25 00:51:20.181556] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:27.301 passed 00:05:27.301 Test: mem map adjacent registrations ...passed 00:05:27.301 00:05:27.301 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.301 suites 1 1 n/a 0 0 00:05:27.301 tests 4 4 4 0 0 00:05:27.301 asserts 152 152 152 0 n/a 00:05:27.301 00:05:27.301 Elapsed time = 0.140 seconds 00:05:27.301 00:05:27.301 real 0m0.148s 00:05:27.301 user 0m0.139s 00:05:27.301 sys 0m0.008s 00:05:27.301 00:51:20 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.301 00:51:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:27.301 ************************************ 00:05:27.301 END TEST env_memory 00:05:27.301 ************************************ 00:05:27.301 00:51:20 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:27.301 00:51:20 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.301 00:51:20 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.301 00:51:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.301 ************************************ 00:05:27.301 START TEST env_vtophys 00:05:27.301 ************************************ 00:05:27.301 00:51:20 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:27.301 EAL: lib.eal log level changed from notice to debug 00:05:27.301 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.301 EAL: Detected lcore 1 as core 1 on socket 0 00:05:27.301 EAL: Detected lcore 2 as core 2 on socket 0 00:05:27.301 EAL: Detected lcore 3 as core 3 on socket 0 00:05:27.301 EAL: Detected lcore 4 as core 4 on socket 0 00:05:27.301 EAL: Detected lcore 5 as core 5 on socket 0 00:05:27.301 EAL: Detected lcore 6 as core 8 on socket 0 00:05:27.301 EAL: Detected lcore 7 as core 9 on socket 0 00:05:27.301 EAL: Detected lcore 8 as core 10 on socket 0 00:05:27.301 EAL: Detected lcore 9 as core 11 on socket 0 00:05:27.301 EAL: Detected lcore 10 as core 12 on socket 0 00:05:27.301 EAL: Detected lcore 11 as core 13 on socket 0 00:05:27.301 EAL: Detected lcore 12 as core 0 on socket 1 00:05:27.301 EAL: Detected lcore 13 as core 1 on socket 1 00:05:27.301 EAL: Detected lcore 14 as core 2 on socket 1 00:05:27.301 EAL: Detected lcore 15 as core 3 on socket 1 00:05:27.301 EAL: Detected lcore 16 as core 4 on socket 1 00:05:27.301 EAL: Detected lcore 17 as core 5 on socket 1 00:05:27.301 EAL: Detected lcore 18 as core 8 on socket 1 00:05:27.301 EAL: Detected lcore 19 as core 9 on socket 1 00:05:27.301 EAL: Detected lcore 20 as core 10 on socket 1 00:05:27.301 EAL: Detected lcore 21 as core 11 on socket 1 00:05:27.301 EAL: Detected lcore 22 as core 12 on socket 1 00:05:27.301 EAL: Detected lcore 23 as core 13 on socket 1 00:05:27.301 EAL: Detected lcore 24 as core 0 on socket 0 00:05:27.301 EAL: Detected lcore 25 as core 1 on socket 0 00:05:27.301 EAL: Detected lcore 26 as core 2 on socket 0 00:05:27.301 EAL: Detected lcore 27 as core 3 on socket 0 00:05:27.301 EAL: Detected lcore 28 as core 4 on socket 0 00:05:27.301 EAL: Detected lcore 29 as core 5 on socket 0 00:05:27.301 EAL: Detected lcore 30 as core 8 on socket 0 00:05:27.301 EAL: Detected lcore 31 as core 9 on socket 0 00:05:27.302 EAL: Detected lcore 32 as core 10 on socket 0 00:05:27.302 EAL: Detected lcore 33 as core 11 on socket 0 00:05:27.302 EAL: Detected lcore 34 as core 12 on socket 0 00:05:27.302 EAL: Detected lcore 35 as core 13 on socket 0 00:05:27.302 EAL: Detected lcore 36 as core 0 on socket 1 00:05:27.302 EAL: Detected lcore 37 as core 1 on socket 1 00:05:27.302 EAL: Detected lcore 38 as core 2 on socket 1 00:05:27.302 EAL: Detected lcore 39 as core 3 on socket 1 00:05:27.302 EAL: Detected lcore 40 as core 4 on socket 1 00:05:27.302 EAL: Detected lcore 41 as core 5 on socket 1 00:05:27.302 EAL: Detected lcore 42 as core 8 on socket 1 00:05:27.302 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:27.302 EAL: Detected lcore 44 as core 10 on socket 1 00:05:27.302 EAL: Detected lcore 45 as core 11 on socket 1 00:05:27.302 EAL: Detected lcore 46 as core 12 on socket 1 00:05:27.302 EAL: Detected lcore 47 as core 13 on socket 1 00:05:27.302 EAL: Maximum logical cores by configuration: 128 00:05:27.302 EAL: Detected CPU lcores: 48 00:05:27.302 EAL: Detected NUMA nodes: 2 00:05:27.302 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:27.302 EAL: Detected shared linkage of DPDK 00:05:27.302 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:27.302 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:27.302 EAL: Registered [vdev] bus. 00:05:27.302 EAL: bus.vdev log level changed from disabled to notice 00:05:27.302 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:27.302 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:27.302 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:27.302 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:27.302 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:27.302 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:27.302 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:27.302 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:27.302 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Bus pci wants IOVA as 'DC' 00:05:27.302 EAL: Bus vdev wants IOVA as 'DC' 00:05:27.302 EAL: Buses did not request a specific IOVA mode. 00:05:27.302 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:27.302 EAL: Selected IOVA mode 'VA' 00:05:27.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.302 EAL: Probing VFIO support... 00:05:27.302 EAL: IOMMU type 1 (Type 1) is supported 00:05:27.302 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:27.302 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:27.302 EAL: VFIO support initialized 00:05:27.302 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.302 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.302 EAL: Setting up physically contiguous memory... 
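[editorial note] The IOVA negotiation above is worth a note: both buses answer 'DC' (don't care), so EAL picks IOVA-as-VA because a working IOMMU was detected and VFIO initialized. Whether a host can do that is visible from sysfs before any EAL app starts; a rough pre-flight check, assuming a Linux host with sysfs mounted:

    # Populated IOMMU groups mean vfio-pci can translate IOVAs as plain virtual
    # addresses, which is what EAL selects above ("Selected IOVA mode 'VA'").
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo "IOMMU groups present -> IOVA-as-VA via vfio-pci is available"
    else
        echo "no IOMMU groups -> EAL would fall back to IOVA-as-PA (needs privileged access)"
    fi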
00:05:27.302 EAL: Setting maximum number of open files to 524288 00:05:27.302 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.302 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:27.302 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.302 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.302 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.302 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.302 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.302 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.302 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.302 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.302 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.302 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.302 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.302 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.302 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.302 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.302 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.302 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.302 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.302 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:27.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.302 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:27.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.302 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:27.302 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:27.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.302 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:27.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.302 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:27.302 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:27.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.302 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:27.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.302 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:27.302 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:27.302 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.302 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:27.302 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.302 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.302 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:27.302 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:27.302 EAL: Hugepages will be freed exactly as allocated. 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: TSC frequency is ~2700000 KHz 00:05:27.302 EAL: Main lcore 0 is ready (tid=7ffab8ad6a00;cpuset=[0]) 00:05:27.302 EAL: Trying to obtain current memory policy. 00:05:27.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.302 EAL: Restoring previous memory policy: 0 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.302 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.302 00:05:27.302 00:05:27.302 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.302 http://cunit.sourceforge.net/ 00:05:27.302 00:05:27.302 00:05:27.302 Suite: components_suite 00:05:27.302 Test: vtophys_malloc_test ...passed 00:05:27.302 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.302 EAL: Restoring previous memory policy: 4 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.302 EAL: Trying to obtain current memory policy. 00:05:27.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.302 EAL: Restoring previous memory policy: 4 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.302 EAL: Trying to obtain current memory policy. 00:05:27.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.302 EAL: Restoring previous memory policy: 4 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.302 EAL: Trying to obtain current memory policy. 
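[editorial note] The reservation sizes above are not arbitrary: each memseg list holds n_segs:8192 pages of hugepage_sz:2097152 bytes, and 8192 x 2 MiB is exactly the 0x400000000 (16 GiB) of virtual address space reserved per list, eight lists in total (four per NUMA socket). A one-line check:

    # 8192 segments/list * 2 MiB hugepages = 16 GiB of VA per memseg list
    printf '0x%x\n' $(( 8192 * 2097152 ))   # prints 0x400000000, matching the log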
00:05:27.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.302 EAL: Restoring previous memory policy: 4 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.302 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.302 EAL: Trying to obtain current memory policy. 00:05:27.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.302 EAL: Restoring previous memory policy: 4 00:05:27.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.302 EAL: request: mp_malloc_sync 00:05:27.302 EAL: No shared files mode enabled, IPC is disabled 00:05:27.303 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.303 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.303 EAL: request: mp_malloc_sync 00:05:27.303 EAL: No shared files mode enabled, IPC is disabled 00:05:27.303 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.303 EAL: Trying to obtain current memory policy. 00:05:27.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.303 EAL: Restoring previous memory policy: 4 00:05:27.303 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.303 EAL: request: mp_malloc_sync 00:05:27.303 EAL: No shared files mode enabled, IPC is disabled 00:05:27.303 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.303 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.303 EAL: request: mp_malloc_sync 00:05:27.303 EAL: No shared files mode enabled, IPC is disabled 00:05:27.303 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.303 EAL: Trying to obtain current memory policy. 00:05:27.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.561 EAL: Restoring previous memory policy: 4 00:05:27.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.561 EAL: request: mp_malloc_sync 00:05:27.561 EAL: No shared files mode enabled, IPC is disabled 00:05:27.561 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.561 EAL: request: mp_malloc_sync 00:05:27.561 EAL: No shared files mode enabled, IPC is disabled 00:05:27.561 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.561 EAL: Trying to obtain current memory policy. 00:05:27.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.561 EAL: Restoring previous memory policy: 4 00:05:27.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.561 EAL: request: mp_malloc_sync 00:05:27.561 EAL: No shared files mode enabled, IPC is disabled 00:05:27.561 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.561 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.561 EAL: request: mp_malloc_sync 00:05:27.561 EAL: No shared files mode enabled, IPC is disabled 00:05:27.561 EAL: Heap on socket 0 was shrunk by 258MB 00:05:27.561 EAL: Trying to obtain current memory policy. 
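[editorial note] The heap-expansion sizes in this suite (4, 6, 10, 18, 34, 66, 130, and 258 MB above, then 514 and 1026 MB below) follow a 2^k + 2 MB progression, plausibly a 2^k MB allocation plus 2 MB of allocator overhead, though the log alone does not show the split. Each round roughly doubles the pressure on the mem event callbacks:

    # Reproduce the vtophys_spdk_malloc_test expansion ladder:
    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB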
00:05:27.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.819 EAL: Restoring previous memory policy: 4 00:05:27.819 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.819 EAL: request: mp_malloc_sync 00:05:27.819 EAL: No shared files mode enabled, IPC is disabled 00:05:27.819 EAL: Heap on socket 0 was expanded by 514MB 00:05:27.819 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.077 EAL: request: mp_malloc_sync 00:05:28.077 EAL: No shared files mode enabled, IPC is disabled 00:05:28.077 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.077 EAL: Trying to obtain current memory policy. 00:05:28.077 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.335 EAL: Restoring previous memory policy: 4 00:05:28.335 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.335 EAL: request: mp_malloc_sync 00:05:28.335 EAL: No shared files mode enabled, IPC is disabled 00:05:28.335 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.592 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.850 EAL: request: mp_malloc_sync 00:05:28.850 EAL: No shared files mode enabled, IPC is disabled 00:05:28.850 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:28.850 passed 00:05:28.850 00:05:28.850 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.850 suites 1 1 n/a 0 0 00:05:28.850 tests 2 2 2 0 0 00:05:28.850 asserts 497 497 497 0 n/a 00:05:28.850 00:05:28.850 Elapsed time = 1.377 seconds 00:05:28.850 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.850 EAL: request: mp_malloc_sync 00:05:28.850 EAL: No shared files mode enabled, IPC is disabled 00:05:28.850 EAL: Heap on socket 0 was shrunk by 2MB 00:05:28.850 EAL: No shared files mode enabled, IPC is disabled 00:05:28.850 EAL: No shared files mode enabled, IPC is disabled 00:05:28.850 EAL: No shared files mode enabled, IPC is disabled 00:05:28.850 00:05:28.850 real 0m1.499s 00:05:28.850 user 0m0.860s 00:05:28.850 sys 0m0.603s 00:05:28.850 00:51:21 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.850 00:51:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:28.850 ************************************ 00:05:28.850 END TEST env_vtophys 00:05:28.850 ************************************ 00:05:28.850 00:51:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:28.850 00:51:21 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.850 00:51:21 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.850 00:51:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.850 ************************************ 00:05:28.850 START TEST env_pci 00:05:28.850 ************************************ 00:05:28.850 00:51:21 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:28.850 00:05:28.850 00:05:28.850 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.850 http://cunit.sourceforge.net/ 00:05:28.850 00:05:28.850 00:05:28.850 Suite: pci 00:05:28.850 Test: pci_hook ...[2024-07-25 00:51:21.836682] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3635914 has claimed it 00:05:28.850 EAL: Cannot find device (10000:00:01.0) 00:05:28.850 EAL: Failed to attach device on primary process 00:05:28.850 passed 00:05:28.850 00:05:28.850 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:28.850 suites 1 1 n/a 0 0 00:05:28.850 tests 1 1 1 0 0 00:05:28.850 asserts 25 25 25 0 n/a 00:05:28.850 00:05:28.850 Elapsed time = 0.021 seconds 00:05:28.850 00:05:28.850 real 0m0.033s 00:05:28.850 user 0m0.006s 00:05:28.850 sys 0m0.027s 00:05:28.850 00:51:21 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.850 00:51:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:28.850 ************************************ 00:05:28.850 END TEST env_pci 00:05:28.850 ************************************ 00:05:28.850 00:51:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.850 00:51:21 env -- env/env.sh@15 -- # uname 00:05:28.850 00:51:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.850 00:51:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:28.850 00:51:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.850 00:51:21 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:28.850 00:51:21 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.850 00:51:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.850 ************************************ 00:05:28.850 START TEST env_dpdk_post_init 00:05:28.850 ************************************ 00:05:28.851 00:51:21 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.851 EAL: Detected CPU lcores: 48 00:05:28.851 EAL: Detected NUMA nodes: 2 00:05:28.851 EAL: Detected shared linkage of DPDK 00:05:28.851 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.851 EAL: Selected IOVA mode 'VA' 00:05:28.851 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.851 EAL: VFIO support initialized 00:05:28.851 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.108 EAL: Using IOMMU type 1 (Type 1) 00:05:29.108 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:29.108 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:29.108 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:29.108 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:29.108 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:29.108 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:29.108 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:29.109 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:30.042 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:33.320 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:33.320 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:33.320 Starting DPDK initialization... 00:05:33.320 Starting SPDK post initialization... 00:05:33.320 SPDK NVMe probe 00:05:33.320 Attaching to 0000:88:00.0 00:05:33.320 Attached to 0000:88:00.0 00:05:33.320 Cleaning up... 00:05:33.320 00:05:33.320 real 0m4.386s 00:05:33.320 user 0m3.254s 00:05:33.320 sys 0m0.193s 00:05:33.320 00:51:26 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.320 00:51:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 END TEST env_dpdk_post_init 00:05:33.320 ************************************ 00:05:33.320 00:51:26 env -- env/env.sh@26 -- # uname 00:05:33.320 00:51:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.320 00:51:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.320 00:51:26 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.320 00:51:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.320 00:51:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 START TEST env_mem_callbacks 00:05:33.320 ************************************ 00:05:33.320 00:51:26 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.320 EAL: Detected CPU lcores: 48 00:05:33.320 EAL: Detected NUMA nodes: 2 00:05:33.320 EAL: Detected shared linkage of DPDK 00:05:33.320 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.320 EAL: Selected IOVA mode 'VA' 00:05:33.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.320 EAL: VFIO support initialized 00:05:33.320 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.320 00:05:33.320 00:05:33.320 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.320 http://cunit.sourceforge.net/ 00:05:33.320 00:05:33.320 00:05:33.320 Suite: memory 00:05:33.320 Test: test ... 
00:05:33.320 register 0x200000200000 2097152 00:05:33.320 malloc 3145728 00:05:33.320 register 0x200000400000 4194304 00:05:33.320 buf 0x200000500000 len 3145728 PASSED 00:05:33.320 malloc 64 00:05:33.320 buf 0x2000004fff40 len 64 PASSED 00:05:33.320 malloc 4194304 00:05:33.320 register 0x200000800000 6291456 00:05:33.320 buf 0x200000a00000 len 4194304 PASSED 00:05:33.320 free 0x200000500000 3145728 00:05:33.320 free 0x2000004fff40 64 00:05:33.320 unregister 0x200000400000 4194304 PASSED 00:05:33.320 free 0x200000a00000 4194304 00:05:33.320 unregister 0x200000800000 6291456 PASSED 00:05:33.320 malloc 8388608 00:05:33.320 register 0x200000400000 10485760 00:05:33.320 buf 0x200000600000 len 8388608 PASSED 00:05:33.320 free 0x200000600000 8388608 00:05:33.320 unregister 0x200000400000 10485760 PASSED 00:05:33.320 passed 00:05:33.320 00:05:33.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.320 suites 1 1 n/a 0 0 00:05:33.320 tests 1 1 1 0 0 00:05:33.320 asserts 15 15 15 0 n/a 00:05:33.320 00:05:33.320 Elapsed time = 0.005 seconds 00:05:33.320 00:05:33.320 real 0m0.047s 00:05:33.320 user 0m0.010s 00:05:33.320 sys 0m0.038s 00:05:33.320 00:51:26 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.320 00:51:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 END TEST env_mem_callbacks 00:05:33.320 ************************************ 00:05:33.320 00:05:33.320 real 0m6.403s 00:05:33.320 user 0m4.396s 00:05:33.320 sys 0m1.051s 00:05:33.320 00:51:26 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.320 00:51:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 END TEST env 00:05:33.320 ************************************ 00:05:33.320 00:51:26 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:33.320 00:51:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.320 00:51:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.320 00:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 START TEST rpc 00:05:33.320 ************************************ 00:05:33.320 00:51:26 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:33.578 * Looking for test storage... 00:05:33.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:33.578 00:51:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3636576 00:05:33.578 00:51:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:33.578 00:51:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.578 00:51:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3636576 00:05:33.578 00:51:26 rpc -- common/autotest_common.sh@827 -- # '[' -z 3636576 ']' 00:05:33.578 00:51:26 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.578 00:51:26 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:33.578 00:51:26 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
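[editorial note] The mem_callbacks output above and the rpc suite that follows exercise two sides of the same target: raw register/unregister notifications, and JSON-RPC over /var/tmp/spdk.sock. A condensed sketch of the round-trip the rpc_integrity test performs, assuming an SPDK checkout at $rootdir (hypothetical path) and enough hugepages configured:

    rootdir=${rootdir:-/path/to/spdk}   # hypothetical; point at your SPDK tree

    # Start the target with the bdev tracepoint group enabled (-e bdev), as the
    # run above does, then wait until the RPC socket answers.
    "$rootdir/build/bin/spdk_tgt" -e bdev &
    tgt_pid=$!
    until "$rootdir/scripts/rpc.py" rpc_get_methods &>/dev/null; do sleep 0.1; done

    # rpc_integrity in a nutshell: create a malloc bdev, layer a passthru bdev
    # on top of it, and confirm bdev_get_bdevs reports both.
    "$rootdir/scripts/rpc.py" bdev_malloc_create 8 512    # 8 MiB / 512 B blocks -> 16384 blocks
    "$rootdir/scripts/rpc.py" bdev_passthru_create -b Malloc0 -p Passthru0
    "$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length  # expect 2

    # Tear down in reverse order and stop the target.
    "$rootdir/scripts/rpc.py" bdev_passthru_delete Passthru0
    "$rootdir/scripts/rpc.py" bdev_malloc_delete Malloc0
    kill "$tgt_pid"

The trace events themselves land in the shared-memory file reported later by trace_get_info (tpoint_shm_path), which the log notes can be copied from /dev/shm for offline analysis.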
00:05:33.578 00:51:26 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:33.578 00:51:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.578 [2024-07-25 00:51:26.559425] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:05:33.578 [2024-07-25 00:51:26.559515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636576 ] 00:05:33.578 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.578 [2024-07-25 00:51:26.620841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.578 [2024-07-25 00:51:26.708724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:33.578 [2024-07-25 00:51:26.708785] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3636576' to capture a snapshot of events at runtime. 00:05:33.578 [2024-07-25 00:51:26.708807] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:33.578 [2024-07-25 00:51:26.708824] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:33.578 [2024-07-25 00:51:26.708839] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3636576 for offline analysis/debug. 00:05:33.578 [2024-07-25 00:51:26.708883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.836 00:51:26 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:33.836 00:51:26 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:33.836 00:51:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:33.836 00:51:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:33.836 00:51:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:33.836 00:51:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:33.836 00:51:26 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.836 00:51:26 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.836 00:51:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.094 ************************************ 00:05:34.094 START TEST rpc_integrity 00:05:34.094 ************************************ 00:05:34.094 00:51:26 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:34.094 00:51:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.094 00:51:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.094 00:51:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.094 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.094 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.094 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.094 00:51:27 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.094 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.094 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.094 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.094 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.094 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:34.094 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.094 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.094 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.094 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.094 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.094 { 00:05:34.094 "name": "Malloc0", 00:05:34.094 "aliases": [ 00:05:34.094 "fe353c82-d5e9-46df-b5f2-e4ed1b063bda" 00:05:34.094 ], 00:05:34.094 "product_name": "Malloc disk", 00:05:34.094 "block_size": 512, 00:05:34.094 "num_blocks": 16384, 00:05:34.094 "uuid": "fe353c82-d5e9-46df-b5f2-e4ed1b063bda", 00:05:34.094 "assigned_rate_limits": { 00:05:34.094 "rw_ios_per_sec": 0, 00:05:34.094 "rw_mbytes_per_sec": 0, 00:05:34.094 "r_mbytes_per_sec": 0, 00:05:34.094 "w_mbytes_per_sec": 0 00:05:34.094 }, 00:05:34.094 "claimed": false, 00:05:34.094 "zoned": false, 00:05:34.094 "supported_io_types": { 00:05:34.094 "read": true, 00:05:34.094 "write": true, 00:05:34.094 "unmap": true, 00:05:34.094 "write_zeroes": true, 00:05:34.094 "flush": true, 00:05:34.094 "reset": true, 00:05:34.094 "compare": false, 00:05:34.094 "compare_and_write": false, 00:05:34.094 "abort": true, 00:05:34.094 "nvme_admin": false, 00:05:34.094 "nvme_io": false 00:05:34.094 }, 00:05:34.094 "memory_domains": [ 00:05:34.094 { 00:05:34.094 "dma_device_id": "system", 00:05:34.094 "dma_device_type": 1 00:05:34.094 }, 00:05:34.094 { 00:05:34.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.094 "dma_device_type": 2 00:05:34.094 } 00:05:34.094 ], 00:05:34.094 "driver_specific": {} 00:05:34.094 } 00:05:34.094 ]' 00:05:34.094 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.094 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.095 [2024-07-25 00:51:27.103569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:34.095 [2024-07-25 00:51:27.103625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.095 [2024-07-25 00:51:27.103662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c3d60 00:05:34.095 [2024-07-25 00:51:27.103697] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.095 [2024-07-25 00:51:27.105258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.095 [2024-07-25 00:51:27.105302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.095 Passthru0 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.095 { 00:05:34.095 "name": "Malloc0", 00:05:34.095 "aliases": [ 00:05:34.095 "fe353c82-d5e9-46df-b5f2-e4ed1b063bda" 00:05:34.095 ], 00:05:34.095 "product_name": "Malloc disk", 00:05:34.095 "block_size": 512, 00:05:34.095 "num_blocks": 16384, 00:05:34.095 "uuid": "fe353c82-d5e9-46df-b5f2-e4ed1b063bda", 00:05:34.095 "assigned_rate_limits": { 00:05:34.095 "rw_ios_per_sec": 0, 00:05:34.095 "rw_mbytes_per_sec": 0, 00:05:34.095 "r_mbytes_per_sec": 0, 00:05:34.095 "w_mbytes_per_sec": 0 00:05:34.095 }, 00:05:34.095 "claimed": true, 00:05:34.095 "claim_type": "exclusive_write", 00:05:34.095 "zoned": false, 00:05:34.095 "supported_io_types": { 00:05:34.095 "read": true, 00:05:34.095 "write": true, 00:05:34.095 "unmap": true, 00:05:34.095 "write_zeroes": true, 00:05:34.095 "flush": true, 00:05:34.095 "reset": true, 00:05:34.095 "compare": false, 00:05:34.095 "compare_and_write": false, 00:05:34.095 "abort": true, 00:05:34.095 "nvme_admin": false, 00:05:34.095 "nvme_io": false 00:05:34.095 }, 00:05:34.095 "memory_domains": [ 00:05:34.095 { 00:05:34.095 "dma_device_id": "system", 00:05:34.095 "dma_device_type": 1 00:05:34.095 }, 00:05:34.095 { 00:05:34.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.095 "dma_device_type": 2 00:05:34.095 } 00:05:34.095 ], 00:05:34.095 "driver_specific": {} 00:05:34.095 }, 00:05:34.095 { 00:05:34.095 "name": "Passthru0", 00:05:34.095 "aliases": [ 00:05:34.095 "4d8323ec-8ee1-5abd-bed8-2eead27af5f3" 00:05:34.095 ], 00:05:34.095 "product_name": "passthru", 00:05:34.095 "block_size": 512, 00:05:34.095 "num_blocks": 16384, 00:05:34.095 "uuid": "4d8323ec-8ee1-5abd-bed8-2eead27af5f3", 00:05:34.095 "assigned_rate_limits": { 00:05:34.095 "rw_ios_per_sec": 0, 00:05:34.095 "rw_mbytes_per_sec": 0, 00:05:34.095 "r_mbytes_per_sec": 0, 00:05:34.095 "w_mbytes_per_sec": 0 00:05:34.095 }, 00:05:34.095 "claimed": false, 00:05:34.095 "zoned": false, 00:05:34.095 "supported_io_types": { 00:05:34.095 "read": true, 00:05:34.095 "write": true, 00:05:34.095 "unmap": true, 00:05:34.095 "write_zeroes": true, 00:05:34.095 "flush": true, 00:05:34.095 "reset": true, 00:05:34.095 "compare": false, 00:05:34.095 "compare_and_write": false, 00:05:34.095 "abort": true, 00:05:34.095 "nvme_admin": false, 00:05:34.095 "nvme_io": false 00:05:34.095 }, 00:05:34.095 "memory_domains": [ 00:05:34.095 { 00:05:34.095 "dma_device_id": "system", 00:05:34.095 "dma_device_type": 1 00:05:34.095 }, 00:05:34.095 { 00:05:34.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.095 "dma_device_type": 2 00:05:34.095 } 00:05:34.095 ], 00:05:34.095 "driver_specific": { 00:05:34.095 "passthru": { 00:05:34.095 "name": "Passthru0", 00:05:34.095 "base_bdev_name": "Malloc0" 00:05:34.095 } 00:05:34.095 } 00:05:34.095 } 00:05:34.095 ]' 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.095 
00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.095 00:51:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.095 00:05:34.095 real 0m0.233s 00:05:34.095 user 0m0.153s 00:05:34.095 sys 0m0.019s 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.095 00:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.095 ************************************ 00:05:34.095 END TEST rpc_integrity 00:05:34.095 ************************************ 00:05:34.353 00:51:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:34.353 00:51:27 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.353 00:51:27 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.353 00:51:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 ************************************ 00:05:34.353 START TEST rpc_plugins 00:05:34.353 ************************************ 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:34.353 { 00:05:34.353 "name": "Malloc1", 00:05:34.353 "aliases": [ 00:05:34.353 "dfa627f0-08e3-4150-9d9a-3d8de7ed5aa9" 00:05:34.353 ], 00:05:34.353 "product_name": "Malloc disk", 00:05:34.353 "block_size": 4096, 00:05:34.353 "num_blocks": 256, 00:05:34.353 "uuid": "dfa627f0-08e3-4150-9d9a-3d8de7ed5aa9", 00:05:34.353 "assigned_rate_limits": { 00:05:34.353 "rw_ios_per_sec": 0, 00:05:34.353 "rw_mbytes_per_sec": 0, 00:05:34.353 "r_mbytes_per_sec": 0, 00:05:34.353 "w_mbytes_per_sec": 0 00:05:34.353 }, 00:05:34.353 "claimed": false, 00:05:34.353 "zoned": false, 00:05:34.353 "supported_io_types": { 00:05:34.353 "read": true, 00:05:34.353 "write": true, 00:05:34.353 "unmap": true, 00:05:34.353 "write_zeroes": true, 00:05:34.353 
"flush": true, 00:05:34.353 "reset": true, 00:05:34.353 "compare": false, 00:05:34.353 "compare_and_write": false, 00:05:34.353 "abort": true, 00:05:34.353 "nvme_admin": false, 00:05:34.353 "nvme_io": false 00:05:34.353 }, 00:05:34.353 "memory_domains": [ 00:05:34.353 { 00:05:34.353 "dma_device_id": "system", 00:05:34.353 "dma_device_type": 1 00:05:34.353 }, 00:05:34.353 { 00:05:34.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.353 "dma_device_type": 2 00:05:34.353 } 00:05:34.353 ], 00:05:34.353 "driver_specific": {} 00:05:34.353 } 00:05:34.353 ]' 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:34.353 00:51:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:34.353 00:05:34.353 real 0m0.114s 00:05:34.353 user 0m0.080s 00:05:34.353 sys 0m0.004s 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.353 00:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 ************************************ 00:05:34.353 END TEST rpc_plugins 00:05:34.353 ************************************ 00:05:34.353 00:51:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:34.353 00:51:27 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.353 00:51:27 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.353 00:51:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 ************************************ 00:05:34.353 START TEST rpc_trace_cmd_test 00:05:34.353 ************************************ 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:34.353 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3636576", 00:05:34.353 "tpoint_group_mask": "0x8", 00:05:34.353 "iscsi_conn": { 00:05:34.353 "mask": "0x2", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "scsi": { 00:05:34.353 "mask": "0x4", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "bdev": { 00:05:34.353 "mask": "0x8", 00:05:34.353 "tpoint_mask": 
"0xffffffffffffffff" 00:05:34.353 }, 00:05:34.353 "nvmf_rdma": { 00:05:34.353 "mask": "0x10", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "nvmf_tcp": { 00:05:34.353 "mask": "0x20", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "ftl": { 00:05:34.353 "mask": "0x40", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "blobfs": { 00:05:34.353 "mask": "0x80", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "dsa": { 00:05:34.353 "mask": "0x200", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "thread": { 00:05:34.353 "mask": "0x400", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "nvme_pcie": { 00:05:34.353 "mask": "0x800", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "iaa": { 00:05:34.353 "mask": "0x1000", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "nvme_tcp": { 00:05:34.353 "mask": "0x2000", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "bdev_nvme": { 00:05:34.353 "mask": "0x4000", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 }, 00:05:34.353 "sock": { 00:05:34.353 "mask": "0x8000", 00:05:34.353 "tpoint_mask": "0x0" 00:05:34.353 } 00:05:34.353 }' 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:34.353 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:34.611 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:34.611 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:34.611 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:34.611 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:34.612 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:34.612 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:34.612 00:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:34.612 00:05:34.612 real 0m0.198s 00:05:34.612 user 0m0.174s 00:05:34.612 sys 0m0.014s 00:05:34.612 00:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.612 00:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 ************************************ 00:05:34.612 END TEST rpc_trace_cmd_test 00:05:34.612 ************************************ 00:05:34.612 00:51:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:34.612 00:51:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:34.612 00:51:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:34.612 00:51:27 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.612 00:51:27 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.612 00:51:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 ************************************ 00:05:34.612 START TEST rpc_daemon_integrity 00:05:34.612 ************************************ 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.612 { 00:05:34.612 "name": "Malloc2", 00:05:34.612 "aliases": [ 00:05:34.612 "1b8db202-f4b4-4a2d-9b46-1141dd009c76" 00:05:34.612 ], 00:05:34.612 "product_name": "Malloc disk", 00:05:34.612 "block_size": 512, 00:05:34.612 "num_blocks": 16384, 00:05:34.612 "uuid": "1b8db202-f4b4-4a2d-9b46-1141dd009c76", 00:05:34.612 "assigned_rate_limits": { 00:05:34.612 "rw_ios_per_sec": 0, 00:05:34.612 "rw_mbytes_per_sec": 0, 00:05:34.612 "r_mbytes_per_sec": 0, 00:05:34.612 "w_mbytes_per_sec": 0 00:05:34.612 }, 00:05:34.612 "claimed": false, 00:05:34.612 "zoned": false, 00:05:34.612 "supported_io_types": { 00:05:34.612 "read": true, 00:05:34.612 "write": true, 00:05:34.612 "unmap": true, 00:05:34.612 "write_zeroes": true, 00:05:34.612 "flush": true, 00:05:34.612 "reset": true, 00:05:34.612 "compare": false, 00:05:34.612 "compare_and_write": false, 00:05:34.612 "abort": true, 00:05:34.612 "nvme_admin": false, 00:05:34.612 "nvme_io": false 00:05:34.612 }, 00:05:34.612 "memory_domains": [ 00:05:34.612 { 00:05:34.612 "dma_device_id": "system", 00:05:34.612 "dma_device_type": 1 00:05:34.612 }, 00:05:34.612 { 00:05:34.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.612 "dma_device_type": 2 00:05:34.612 } 00:05:34.612 ], 00:05:34.612 "driver_specific": {} 00:05:34.612 } 00:05:34.612 ]' 00:05:34.612 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.870 [2024-07-25 00:51:27.781788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:34.870 [2024-07-25 00:51:27.781833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.870 [2024-07-25 00:51:27.781869] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b75420 00:05:34.870 [2024-07-25 00:51:27.781896] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.870 [2024-07-25 00:51:27.783351] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.870 [2024-07-25 00:51:27.783378] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.870 Passthru0 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.870 { 00:05:34.870 "name": "Malloc2", 00:05:34.870 "aliases": [ 00:05:34.870 "1b8db202-f4b4-4a2d-9b46-1141dd009c76" 00:05:34.870 ], 00:05:34.870 "product_name": "Malloc disk", 00:05:34.870 "block_size": 512, 00:05:34.870 "num_blocks": 16384, 00:05:34.870 "uuid": "1b8db202-f4b4-4a2d-9b46-1141dd009c76", 00:05:34.870 "assigned_rate_limits": { 00:05:34.870 "rw_ios_per_sec": 0, 00:05:34.870 "rw_mbytes_per_sec": 0, 00:05:34.870 "r_mbytes_per_sec": 0, 00:05:34.870 "w_mbytes_per_sec": 0 00:05:34.870 }, 00:05:34.870 "claimed": true, 00:05:34.870 "claim_type": "exclusive_write", 00:05:34.870 "zoned": false, 00:05:34.870 "supported_io_types": { 00:05:34.870 "read": true, 00:05:34.870 "write": true, 00:05:34.870 "unmap": true, 00:05:34.870 "write_zeroes": true, 00:05:34.870 "flush": true, 00:05:34.870 "reset": true, 00:05:34.870 "compare": false, 00:05:34.870 "compare_and_write": false, 00:05:34.870 "abort": true, 00:05:34.870 "nvme_admin": false, 00:05:34.870 "nvme_io": false 00:05:34.870 }, 00:05:34.870 "memory_domains": [ 00:05:34.870 { 00:05:34.870 "dma_device_id": "system", 00:05:34.870 "dma_device_type": 1 00:05:34.870 }, 00:05:34.870 { 00:05:34.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.870 "dma_device_type": 2 00:05:34.870 } 00:05:34.870 ], 00:05:34.870 "driver_specific": {} 00:05:34.870 }, 00:05:34.870 { 00:05:34.870 "name": "Passthru0", 00:05:34.870 "aliases": [ 00:05:34.870 "5b8c5f8c-dab1-58e4-8b49-3890eec37d24" 00:05:34.870 ], 00:05:34.870 "product_name": "passthru", 00:05:34.870 "block_size": 512, 00:05:34.870 "num_blocks": 16384, 00:05:34.870 "uuid": "5b8c5f8c-dab1-58e4-8b49-3890eec37d24", 00:05:34.870 "assigned_rate_limits": { 00:05:34.870 "rw_ios_per_sec": 0, 00:05:34.870 "rw_mbytes_per_sec": 0, 00:05:34.870 "r_mbytes_per_sec": 0, 00:05:34.870 "w_mbytes_per_sec": 0 00:05:34.870 }, 00:05:34.870 "claimed": false, 00:05:34.870 "zoned": false, 00:05:34.870 "supported_io_types": { 00:05:34.870 "read": true, 00:05:34.870 "write": true, 00:05:34.870 "unmap": true, 00:05:34.870 "write_zeroes": true, 00:05:34.870 "flush": true, 00:05:34.870 "reset": true, 00:05:34.870 "compare": false, 00:05:34.870 "compare_and_write": false, 00:05:34.870 "abort": true, 00:05:34.870 "nvme_admin": false, 00:05:34.870 "nvme_io": false 00:05:34.870 }, 00:05:34.870 "memory_domains": [ 00:05:34.870 { 00:05:34.870 "dma_device_id": "system", 00:05:34.870 "dma_device_type": 1 00:05:34.870 }, 00:05:34.870 { 00:05:34.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.870 "dma_device_type": 2 00:05:34.870 } 00:05:34.870 ], 00:05:34.870 "driver_specific": { 00:05:34.870 "passthru": { 00:05:34.870 "name": "Passthru0", 00:05:34.870 "base_bdev_name": "Malloc2" 00:05:34.870 } 00:05:34.870 } 00:05:34.870 } 00:05:34.870 ]' 00:05:34.870 00:51:27 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.870 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.871 00:05:34.871 real 0m0.233s 00:05:34.871 user 0m0.157s 00:05:34.871 sys 0m0.016s 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.871 00:51:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.871 ************************************ 00:05:34.871 END TEST rpc_daemon_integrity 00:05:34.871 ************************************ 00:05:34.871 00:51:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:34.871 00:51:27 rpc -- rpc/rpc.sh@84 -- # killprocess 3636576 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@946 -- # '[' -z 3636576 ']' 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@950 -- # kill -0 3636576 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@951 -- # uname 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3636576 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3636576' 00:05:34.871 killing process with pid 3636576 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@965 -- # kill 3636576 00:05:34.871 00:51:27 rpc -- common/autotest_common.sh@970 -- # wait 3636576 00:05:35.435 00:05:35.435 real 0m1.913s 00:05:35.435 user 0m2.428s 00:05:35.435 sys 0m0.560s 00:05:35.435 00:51:28 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.435 00:51:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.435 ************************************ 00:05:35.435 END TEST rpc 00:05:35.435 ************************************ 00:05:35.435 00:51:28 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:35.435 00:51:28 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.435 00:51:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.435 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:35.435 ************************************ 00:05:35.435 START TEST skip_rpc 00:05:35.435 ************************************ 00:05:35.435 00:51:28 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:35.435 * Looking for test storage... 00:05:35.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:35.435 00:51:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.435 00:51:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.435 00:51:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:35.435 00:51:28 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.435 00:51:28 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.435 00:51:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.435 ************************************ 00:05:35.435 START TEST skip_rpc 00:05:35.435 ************************************ 00:05:35.435 00:51:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:35.435 00:51:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3637006 00:05:35.435 00:51:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:35.435 00:51:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.435 00:51:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:35.435 [2024-07-25 00:51:28.554021] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:05:35.435 [2024-07-25 00:51:28.554097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637006 ] 00:05:35.435 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.693 [2024-07-25 00:51:28.614469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.693 [2024-07-25 00:51:28.706412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3637006 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3637006 ']' 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3637006 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637006 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.952 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.953 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637006' 00:05:40.953 killing process with pid 3637006 00:05:40.953 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3637006 00:05:40.953 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3637006 00:05:40.953 00:05:40.953 real 0m5.445s 00:05:40.953 user 0m5.141s 00:05:40.953 sys 0m0.310s 00:05:40.953 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.953 00:51:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.953 ************************************ 00:05:40.953 END TEST skip_rpc 
00:05:40.953 ************************************ 00:05:40.953 00:51:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:40.953 00:51:33 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.953 00:51:33 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.953 00:51:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.953 ************************************ 00:05:40.953 START TEST skip_rpc_with_json 00:05:40.953 ************************************ 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3637693 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3637693 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3637693 ']' 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.953 00:51:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.953 [2024-07-25 00:51:34.047366] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:05:40.953 [2024-07-25 00:51:34.047453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637693 ] 00:05:40.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.211 [2024-07-25 00:51:34.106658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.211 [2024-07-25 00:51:34.193163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.469 [2024-07-25 00:51:34.444873] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:41.469 request: 00:05:41.469 { 00:05:41.469 "trtype": "tcp", 00:05:41.469 "method": "nvmf_get_transports", 00:05:41.469 "req_id": 1 00:05:41.469 } 00:05:41.469 Got JSON-RPC error response 00:05:41.469 response: 00:05:41.469 { 00:05:41.469 "code": -19, 00:05:41.469 "message": "No such device" 00:05:41.469 } 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.469 [2024-07-25 00:51:34.452996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.469 00:51:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.469 { 00:05:41.469 "subsystems": [ 00:05:41.469 { 00:05:41.469 "subsystem": "vfio_user_target", 00:05:41.469 "config": null 00:05:41.469 }, 00:05:41.469 { 00:05:41.469 "subsystem": "keyring", 00:05:41.469 "config": [] 00:05:41.469 }, 00:05:41.469 { 00:05:41.469 "subsystem": "iobuf", 00:05:41.469 "config": [ 00:05:41.469 { 00:05:41.469 "method": "iobuf_set_options", 00:05:41.469 "params": { 00:05:41.469 "small_pool_count": 8192, 00:05:41.469 "large_pool_count": 1024, 00:05:41.469 "small_bufsize": 8192, 00:05:41.469 "large_bufsize": 135168 00:05:41.469 } 00:05:41.469 } 00:05:41.469 ] 00:05:41.469 }, 00:05:41.469 { 00:05:41.469 "subsystem": "sock", 00:05:41.469 "config": [ 00:05:41.469 { 00:05:41.469 "method": "sock_set_default_impl", 00:05:41.469 "params": { 00:05:41.469 "impl_name": "posix" 00:05:41.469 } 00:05:41.469 }, 00:05:41.469 { 00:05:41.469 "method": 
"sock_impl_set_options", 00:05:41.469 "params": { 00:05:41.469 "impl_name": "ssl", 00:05:41.469 "recv_buf_size": 4096, 00:05:41.469 "send_buf_size": 4096, 00:05:41.469 "enable_recv_pipe": true, 00:05:41.469 "enable_quickack": false, 00:05:41.469 "enable_placement_id": 0, 00:05:41.469 "enable_zerocopy_send_server": true, 00:05:41.469 "enable_zerocopy_send_client": false, 00:05:41.469 "zerocopy_threshold": 0, 00:05:41.469 "tls_version": 0, 00:05:41.469 "enable_ktls": false 00:05:41.469 } 00:05:41.469 }, 00:05:41.469 { 00:05:41.469 "method": "sock_impl_set_options", 00:05:41.469 "params": { 00:05:41.469 "impl_name": "posix", 00:05:41.469 "recv_buf_size": 2097152, 00:05:41.469 "send_buf_size": 2097152, 00:05:41.469 "enable_recv_pipe": true, 00:05:41.469 "enable_quickack": false, 00:05:41.469 "enable_placement_id": 0, 00:05:41.469 "enable_zerocopy_send_server": true, 00:05:41.469 "enable_zerocopy_send_client": false, 00:05:41.469 "zerocopy_threshold": 0, 00:05:41.469 "tls_version": 0, 00:05:41.469 "enable_ktls": false 00:05:41.469 } 00:05:41.469 } 00:05:41.469 ] 00:05:41.469 }, 00:05:41.469 { 00:05:41.469 "subsystem": "vmd", 00:05:41.469 "config": [] 00:05:41.469 }, 00:05:41.469 { 00:05:41.469 "subsystem": "accel", 00:05:41.469 "config": [ 00:05:41.469 { 00:05:41.469 "method": "accel_set_options", 00:05:41.469 "params": { 00:05:41.469 "small_cache_size": 128, 00:05:41.469 "large_cache_size": 16, 00:05:41.469 "task_count": 2048, 00:05:41.469 "sequence_count": 2048, 00:05:41.469 "buf_count": 2048 00:05:41.469 } 00:05:41.469 } 00:05:41.469 ] 00:05:41.469 }, 00:05:41.469 { 00:05:41.470 "subsystem": "bdev", 00:05:41.470 "config": [ 00:05:41.470 { 00:05:41.470 "method": "bdev_set_options", 00:05:41.470 "params": { 00:05:41.470 "bdev_io_pool_size": 65535, 00:05:41.470 "bdev_io_cache_size": 256, 00:05:41.470 "bdev_auto_examine": true, 00:05:41.470 "iobuf_small_cache_size": 128, 00:05:41.470 "iobuf_large_cache_size": 16 00:05:41.470 } 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "method": "bdev_raid_set_options", 00:05:41.470 "params": { 00:05:41.470 "process_window_size_kb": 1024 00:05:41.470 } 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "method": "bdev_iscsi_set_options", 00:05:41.470 "params": { 00:05:41.470 "timeout_sec": 30 00:05:41.470 } 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "method": "bdev_nvme_set_options", 00:05:41.470 "params": { 00:05:41.470 "action_on_timeout": "none", 00:05:41.470 "timeout_us": 0, 00:05:41.470 "timeout_admin_us": 0, 00:05:41.470 "keep_alive_timeout_ms": 10000, 00:05:41.470 "arbitration_burst": 0, 00:05:41.470 "low_priority_weight": 0, 00:05:41.470 "medium_priority_weight": 0, 00:05:41.470 "high_priority_weight": 0, 00:05:41.470 "nvme_adminq_poll_period_us": 10000, 00:05:41.470 "nvme_ioq_poll_period_us": 0, 00:05:41.470 "io_queue_requests": 0, 00:05:41.470 "delay_cmd_submit": true, 00:05:41.470 "transport_retry_count": 4, 00:05:41.470 "bdev_retry_count": 3, 00:05:41.470 "transport_ack_timeout": 0, 00:05:41.470 "ctrlr_loss_timeout_sec": 0, 00:05:41.470 "reconnect_delay_sec": 0, 00:05:41.470 "fast_io_fail_timeout_sec": 0, 00:05:41.470 "disable_auto_failback": false, 00:05:41.470 "generate_uuids": false, 00:05:41.470 "transport_tos": 0, 00:05:41.470 "nvme_error_stat": false, 00:05:41.470 "rdma_srq_size": 0, 00:05:41.470 "io_path_stat": false, 00:05:41.470 "allow_accel_sequence": false, 00:05:41.470 "rdma_max_cq_size": 0, 00:05:41.470 "rdma_cm_event_timeout_ms": 0, 00:05:41.470 "dhchap_digests": [ 00:05:41.470 "sha256", 00:05:41.470 "sha384", 00:05:41.470 "sha512" 
00:05:41.470 ], 00:05:41.470 "dhchap_dhgroups": [ 00:05:41.470 "null", 00:05:41.470 "ffdhe2048", 00:05:41.470 "ffdhe3072", 00:05:41.470 "ffdhe4096", 00:05:41.470 "ffdhe6144", 00:05:41.470 "ffdhe8192" 00:05:41.470 ] 00:05:41.470 } 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "method": "bdev_nvme_set_hotplug", 00:05:41.470 "params": { 00:05:41.470 "period_us": 100000, 00:05:41.470 "enable": false 00:05:41.470 } 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "method": "bdev_wait_for_examine" 00:05:41.470 } 00:05:41.470 ] 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "subsystem": "scsi", 00:05:41.470 "config": null 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "subsystem": "scheduler", 00:05:41.470 "config": [ 00:05:41.470 { 00:05:41.470 "method": "framework_set_scheduler", 00:05:41.470 "params": { 00:05:41.470 "name": "static" 00:05:41.470 } 00:05:41.470 } 00:05:41.470 ] 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "subsystem": "vhost_scsi", 00:05:41.470 "config": [] 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "subsystem": "vhost_blk", 00:05:41.470 "config": [] 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "subsystem": "ublk", 00:05:41.470 "config": [] 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "subsystem": "nbd", 00:05:41.470 "config": [] 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "subsystem": "nvmf", 00:05:41.470 "config": [ 00:05:41.470 { 00:05:41.470 "method": "nvmf_set_config", 00:05:41.470 "params": { 00:05:41.470 "discovery_filter": "match_any", 00:05:41.470 "admin_cmd_passthru": { 00:05:41.470 "identify_ctrlr": false 00:05:41.470 } 00:05:41.470 } 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "method": "nvmf_set_max_subsystems", 00:05:41.470 "params": { 00:05:41.470 "max_subsystems": 1024 00:05:41.470 } 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "method": "nvmf_set_crdt", 00:05:41.470 "params": { 00:05:41.470 "crdt1": 0, 00:05:41.470 "crdt2": 0, 00:05:41.470 "crdt3": 0 00:05:41.470 } 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "method": "nvmf_create_transport", 00:05:41.470 "params": { 00:05:41.470 "trtype": "TCP", 00:05:41.470 "max_queue_depth": 128, 00:05:41.470 "max_io_qpairs_per_ctrlr": 127, 00:05:41.470 "in_capsule_data_size": 4096, 00:05:41.470 "max_io_size": 131072, 00:05:41.470 "io_unit_size": 131072, 00:05:41.470 "max_aq_depth": 128, 00:05:41.470 "num_shared_buffers": 511, 00:05:41.470 "buf_cache_size": 4294967295, 00:05:41.470 "dif_insert_or_strip": false, 00:05:41.470 "zcopy": false, 00:05:41.470 "c2h_success": true, 00:05:41.470 "sock_priority": 0, 00:05:41.470 "abort_timeout_sec": 1, 00:05:41.470 "ack_timeout": 0, 00:05:41.470 "data_wr_pool_size": 0 00:05:41.470 } 00:05:41.470 } 00:05:41.470 ] 00:05:41.470 }, 00:05:41.470 { 00:05:41.470 "subsystem": "iscsi", 00:05:41.470 "config": [ 00:05:41.470 { 00:05:41.470 "method": "iscsi_set_options", 00:05:41.470 "params": { 00:05:41.470 "node_base": "iqn.2016-06.io.spdk", 00:05:41.470 "max_sessions": 128, 00:05:41.470 "max_connections_per_session": 2, 00:05:41.470 "max_queue_depth": 64, 00:05:41.470 "default_time2wait": 2, 00:05:41.470 "default_time2retain": 20, 00:05:41.470 "first_burst_length": 8192, 00:05:41.470 "immediate_data": true, 00:05:41.470 "allow_duplicated_isid": false, 00:05:41.470 "error_recovery_level": 0, 00:05:41.470 "nop_timeout": 60, 00:05:41.470 "nop_in_interval": 30, 00:05:41.470 "disable_chap": false, 00:05:41.470 "require_chap": false, 00:05:41.470 "mutual_chap": false, 00:05:41.470 "chap_group": 0, 00:05:41.470 "max_large_datain_per_connection": 64, 00:05:41.470 "max_r2t_per_connection": 4, 00:05:41.470 
"pdu_pool_size": 36864, 00:05:41.470 "immediate_data_pool_size": 16384, 00:05:41.470 "data_out_pool_size": 2048 00:05:41.470 } 00:05:41.470 } 00:05:41.470 ] 00:05:41.470 } 00:05:41.470 ] 00:05:41.470 } 00:05:41.470 00:51:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:41.470 00:51:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3637693 00:05:41.470 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3637693 ']' 00:05:41.470 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3637693 00:05:41.470 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:41.470 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.470 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637693 00:05:41.728 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.728 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.728 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637693' 00:05:41.728 killing process with pid 3637693 00:05:41.728 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3637693 00:05:41.728 00:51:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3637693 00:05:41.986 00:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3637833 00:05:41.986 00:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.986 00:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3637833 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3637833 ']' 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3637833 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637833 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637833' 00:05:47.248 killing process with pid 3637833 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3637833 00:05:47.248 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3637833 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.538 00:05:47.538 real 
0m6.488s 00:05:47.538 user 0m6.087s 00:05:47.538 sys 0m0.680s 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.538 ************************************ 00:05:47.538 END TEST skip_rpc_with_json 00:05:47.538 ************************************ 00:05:47.538 00:51:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:47.538 00:51:40 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.538 00:51:40 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.538 00:51:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.538 ************************************ 00:05:47.538 START TEST skip_rpc_with_delay 00:05:47.538 ************************************ 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:47.538 [2024-07-25 00:51:40.589955] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
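The '--wait-for-rpc' error above is the outcome the skip_rpc_with_delay test asserts: spdk_tgt refuses that flag when started with --no-rpc-server, because there is no RPC server whose readiness it could wait for. A minimal standalone sketch of the same negative check, reusing the binary path from this run (the SPDK_BIN variable is illustrative, not part of the test):

    #!/usr/bin/env bash
    # Negative check: spdk_tgt must reject --wait-for-rpc when the RPC
    # server is disabled. SPDK_BIN is an assumed helper variable.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started despite the conflicting flags" >&2
        exit 1
    fi
    echo "target refused the flag combination, as the test expects"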
00:05:47.538 [2024-07-25 00:51:40.590056] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.538 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.538 00:05:47.538 real 0m0.068s 00:05:47.538 user 0m0.049s 00:05:47.539 sys 0m0.019s 00:05:47.539 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.539 00:51:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:47.539 ************************************ 00:05:47.539 END TEST skip_rpc_with_delay 00:05:47.539 ************************************ 00:05:47.539 00:51:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:47.539 00:51:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:47.539 00:51:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:47.539 00:51:40 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.539 00:51:40 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.539 00:51:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.539 ************************************ 00:05:47.539 START TEST exit_on_failed_rpc_init 00:05:47.539 ************************************ 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3638551 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3638551 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3638551 ']' 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.539 00:51:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.802 [2024-07-25 00:51:40.706387] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:05:47.802 [2024-07-25 00:51:40.706466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638551 ] 00:05:47.802 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.802 [2024-07-25 00:51:40.767826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.802 [2024-07-25 00:51:40.855905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:48.060 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.060 [2024-07-25 00:51:41.170607] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:05:48.060 [2024-07-25 00:51:41.170679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638564 ] 00:05:48.060 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.317 [2024-07-25 00:51:41.231453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.317 [2024-07-25 00:51:41.324913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.317 [2024-07-25 00:51:41.325011] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:48.318 [2024-07-25 00:51:41.325030] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:48.318 [2024-07-25 00:51:41.325041] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3638551 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3638551 ']' 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3638551 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3638551 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3638551' 00:05:48.318 killing process with pid 3638551 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3638551 00:05:48.318 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3638551 00:05:48.882 00:05:48.882 real 0m1.204s 00:05:48.882 user 0m1.287s 00:05:48.882 sys 0m0.468s 00:05:48.882 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.882 00:51:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 ************************************ 00:05:48.882 END TEST exit_on_failed_rpc_init 00:05:48.882 ************************************ 00:05:48.883 00:51:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:48.883 00:05:48.883 real 0m13.461s 00:05:48.883 user 0m12.664s 00:05:48.883 sys 0m1.649s 00:05:48.883 00:51:41 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.883 00:51:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.883 ************************************ 00:05:48.883 END TEST skip_rpc 00:05:48.883 ************************************ 00:05:48.883 00:51:41 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:48.883 00:51:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.883 00:51:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.883 00:51:41 -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.883 ************************************ 00:05:48.883 START TEST rpc_client 00:05:48.883 ************************************ 00:05:48.883 00:51:41 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:48.883 * Looking for test storage... 00:05:48.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:48.883 00:51:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:48.883 OK 00:05:48.883 00:51:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:48.883 00:05:48.883 real 0m0.066s 00:05:48.883 user 0m0.030s 00:05:48.883 sys 0m0.041s 00:05:48.883 00:51:41 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.883 00:51:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:48.883 ************************************ 00:05:48.883 END TEST rpc_client 00:05:48.883 ************************************ 00:05:48.883 00:51:42 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:48.883 00:51:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.883 00:51:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.883 00:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:48.883 ************************************ 00:05:48.883 START TEST json_config 00:05:48.883 ************************************ 00:05:48.883 00:51:42 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.141 00:51:42 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.141 00:51:42 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.141 00:51:42 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.141 00:51:42 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.141 00:51:42 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.141 00:51:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.141 00:51:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.142 00:51:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.142 00:51:42 json_config -- paths/export.sh@5 -- # export PATH 00:05:49.142 00:51:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@47 -- # : 0 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.142 00:51:42 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:49.142 INFO: JSON configuration test init 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 00:51:42 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:49.142 00:51:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:49.142 00:51:42 json_config -- json_config/common.sh@10 -- # shift 00:05:49.142 00:51:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.142 00:51:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.142 00:51:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.142 00:51:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.142 00:51:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.142 00:51:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3638808 00:05:49.142 00:51:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:49.142 00:51:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.142 Waiting for target to run... 
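For context on the handshake that follows: the json_config helpers start spdk_tgt against a dedicated RPC socket and hold it in --wait-for-rpc until a configuration is loaded, and every tgt_rpc call below is scripts/rpc.py pointed at that socket. A sketch of the same flow driven by hand, with paths taken from this run (the polling loop stands in for the suite's waitforlisten helper):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    # Start the target paused on its RPC socket, as json_config.sh does.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &

    # Poll until the socket answers, then feed in a generated config;
    # load_config (reading JSON from stdin) finishes framework
    # initialization once the configuration has been applied.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    "$SPDK/scripts/gen_nvme.sh" --json-with-subsystems | \
        "$SPDK/scripts/rpc.py" -s "$SOCK" load_config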
00:05:49.142 00:51:42 json_config -- json_config/common.sh@25 -- # waitforlisten 3638808 /var/tmp/spdk_tgt.sock 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@827 -- # '[' -z 3638808 ']' 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.142 00:51:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 [2024-07-25 00:51:42.142201] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:05:49.142 [2024-07-25 00:51:42.142315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638808 ] 00:05:49.142 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.400 [2024-07-25 00:51:42.479847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.400 [2024-07-25 00:51:42.546391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.966 00:51:43 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:49.966 00:51:43 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:49.966 00:51:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:49.966 00:05:49.966 00:51:43 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:49.966 00:51:43 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:49.966 00:51:43 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.966 00:51:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 00:51:43 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:49.966 00:51:43 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:49.966 00:51:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.966 00:51:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 00:51:43 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:49.966 00:51:43 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:49.966 00:51:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:53.250 00:51:46 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:53.250 00:51:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:53.250 00:51:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.250 00:51:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.250 00:51:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:53.250 00:51:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:53.250 00:51:46 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:53.250 00:51:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:53.250 00:51:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:53.250 00:51:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:53.507 00:51:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.507 00:51:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:53.507 00:51:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.507 00:51:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:53.507 00:51:46 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:53.507 00:51:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:53.764 MallocForNvmf0 00:05:53.764 00:51:46 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:53.764 00:51:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:54.021 MallocForNvmf1 00:05:54.021 00:51:46 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:54.021 00:51:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:54.278 [2024-07-25 00:51:47.265956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.278 00:51:47 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:54.279 00:51:47 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:54.536 00:51:47 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:54.536 00:51:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:54.794 00:51:47 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:54.794 00:51:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.051 00:51:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:55.051 00:51:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:55.309 [2024-07-25 00:51:48.257180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:55.309 00:51:48 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:55.309 00:51:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.309 00:51:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.309 00:51:48 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:55.309 00:51:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.309 00:51:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.309 00:51:48 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:55.309 00:51:48 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.309 00:51:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.566 MallocBdevForConfigChangeCheck 00:05:55.566 00:51:48 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:55.566 00:51:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.566 00:51:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.566 00:51:48 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:55.566 00:51:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.823 00:51:48 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:55.823 INFO: shutting down applications... 
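The create_nvmf_subsystem_config sequence traced above reduces to a handful of RPCs plus a save_config snapshot; replayed standalone against the same socket, with every argument taken verbatim from the trace:

rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MiB bdev, 512-byte blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MiB bdev, 1024-byte blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0             # TCP transport; -u/-c values as traced
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc save_config > spdk_tgt_config.json                    # snapshot consumed by the relaunch below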
00:05:55.823 00:51:48 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:55.823 00:51:48 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:55.823 00:51:48 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:55.823 00:51:48 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:57.720 Calling clear_iscsi_subsystem 00:05:57.720 Calling clear_nvmf_subsystem 00:05:57.720 Calling clear_nbd_subsystem 00:05:57.720 Calling clear_ublk_subsystem 00:05:57.720 Calling clear_vhost_blk_subsystem 00:05:57.720 Calling clear_vhost_scsi_subsystem 00:05:57.720 Calling clear_bdev_subsystem 00:05:57.720 00:51:50 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:57.720 00:51:50 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:57.720 00:51:50 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:57.720 00:51:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.720 00:51:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:57.720 00:51:50 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:57.977 00:51:51 json_config -- json_config/json_config.sh@345 -- # break 00:05:57.977 00:51:51 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:57.977 00:51:51 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:57.977 00:51:51 json_config -- json_config/common.sh@31 -- # local app=target 00:05:57.977 00:51:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.977 00:51:51 json_config -- json_config/common.sh@35 -- # [[ -n 3638808 ]] 00:05:57.977 00:51:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3638808 00:05:57.977 00:51:51 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.977 00:51:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.977 00:51:51 json_config -- json_config/common.sh@41 -- # kill -0 3638808 00:05:57.977 00:51:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.543 00:51:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.543 00:51:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.543 00:51:51 json_config -- json_config/common.sh@41 -- # kill -0 3638808 00:05:58.543 00:51:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.543 00:51:51 json_config -- json_config/common.sh@43 -- # break 00:05:58.543 00:51:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.543 00:51:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.543 SPDK target shutdown done 00:05:58.543 00:51:51 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:58.543 INFO: relaunching applications... 
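The shutdown traced here sends SIGINT and then polls up to 30 times at 0.5 s intervals before declaring the target down. A sketch; note the traced helper errors out on timeout rather than escalating, so the return-1 path here is only an approximation:

shutdown_app() {
  local pid=$1 i
  kill -SIGINT "$pid" 2>/dev/null
  for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
    sleep 0.5
  done
  return 1   # still alive after ~15 s; the test framework treats this as a failure
}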
00:05:58.543 00:51:51 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.543 00:51:51 json_config -- json_config/common.sh@9 -- # local app=target 00:05:58.543 00:51:51 json_config -- json_config/common.sh@10 -- # shift 00:05:58.543 00:51:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.543 00:51:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.543 00:51:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.543 00:51:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.543 00:51:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.543 00:51:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3640107 00:05:58.543 00:51:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.543 00:51:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.543 Waiting for target to run... 00:05:58.543 00:51:51 json_config -- json_config/common.sh@25 -- # waitforlisten 3640107 /var/tmp/spdk_tgt.sock 00:05:58.543 00:51:51 json_config -- common/autotest_common.sh@827 -- # '[' -z 3640107 ']' 00:05:58.543 00:51:51 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.543 00:51:51 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.543 00:51:51 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.543 00:51:51 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.543 00:51:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.543 [2024-07-25 00:51:51.573379] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:05:58.543 [2024-07-25 00:51:51.573462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640107 ] 00:05:58.543 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.109 [2024-07-25 00:51:52.073869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.109 [2024-07-25 00:51:52.156010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.386 [2024-07-25 00:51:55.183417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.386 [2024-07-25 00:51:55.215910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:02.951 00:51:55 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.951 00:51:55 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:02.951 00:51:55 json_config -- json_config/common.sh@26 -- # echo '' 00:06:02.951 00:06:02.951 00:51:55 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:02.951 00:51:55 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:02.951 INFO: Checking if target configuration is the same... 
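Relaunching consumes the snapshot directly: spdk_tgt --json replays every saved RPC at boot, so no --wait-for-rpc is needed, and the "same configuration" check is just a diff of sorted save_config output against the file. A sketch, reusing the waitforlisten sketch above and assuming config_filter.py reads JSON on stdin as the trace suggests:

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json ./spdk_tgt_config.json &                        # reconfigures itself at startup
waitforlisten $! /var/tmp/spdk_tgt.sock
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
    | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > /tmp/running.json
"$SPDK_DIR/test/json_config/config_filter.py" -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/running.json && echo 'INFO: JSON config files are the same'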
00:06:02.951 00:51:55 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.951 00:51:55 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:02.951 00:51:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.951 + '[' 2 -ne 2 ']' 00:06:02.951 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:02.951 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:02.951 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:02.951 +++ basename /dev/fd/62 00:06:02.951 ++ mktemp /tmp/62.XXX 00:06:02.951 + tmp_file_1=/tmp/62.ENF 00:06:02.952 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.952 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:02.952 + tmp_file_2=/tmp/spdk_tgt_config.json.bb5 00:06:02.952 + ret=0 00:06:02.952 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.210 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.468 + diff -u /tmp/62.ENF /tmp/spdk_tgt_config.json.bb5 00:06:03.468 + echo 'INFO: JSON config files are the same' 00:06:03.468 INFO: JSON config files are the same 00:06:03.468 + rm /tmp/62.ENF /tmp/spdk_tgt_config.json.bb5 00:06:03.468 + exit 0 00:06:03.468 00:51:56 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:03.468 00:51:56 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:03.468 INFO: changing configuration and checking if this can be detected... 00:06:03.468 00:51:56 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.468 00:51:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.726 00:51:56 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.726 00:51:56 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:03.726 00:51:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.726 + '[' 2 -ne 2 ']' 00:06:03.726 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:03.726 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:03.726 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.726 +++ basename /dev/fd/62 00:06:03.726 ++ mktemp /tmp/62.XXX 00:06:03.726 + tmp_file_1=/tmp/62.oe9 00:06:03.726 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.726 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.726 + tmp_file_2=/tmp/spdk_tgt_config.json.xkh 00:06:03.726 + ret=0 00:06:03.726 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.983 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.983 + diff -u /tmp/62.oe9 /tmp/spdk_tgt_config.json.xkh 00:06:03.983 + ret=1 00:06:03.983 + echo '=== Start of file: /tmp/62.oe9 ===' 00:06:03.983 + cat /tmp/62.oe9 00:06:03.983 + echo '=== End of file: /tmp/62.oe9 ===' 00:06:03.983 + echo '' 00:06:03.983 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xkh ===' 00:06:03.983 + cat /tmp/spdk_tgt_config.json.xkh 00:06:03.983 + echo '=== End of file: /tmp/spdk_tgt_config.json.xkh ===' 00:06:03.983 + echo '' 00:06:03.983 + rm /tmp/62.oe9 /tmp/spdk_tgt_config.json.xkh 00:06:03.983 + exit 1 00:06:03.983 00:51:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:03.983 INFO: configuration change detected. 00:06:03.983 00:51:57 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:03.983 00:51:57 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:03.983 00:51:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:03.983 00:51:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.983 00:51:57 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:03.983 00:51:57 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@317 -- # [[ -n 3640107 ]] 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:03.984 00:51:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:03.984 00:51:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:03.984 00:51:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.984 00:51:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.984 00:51:57 json_config -- json_config/json_config.sh@323 -- # killprocess 3640107 00:06:03.984 00:51:57 json_config -- common/autotest_common.sh@946 -- # '[' -z 3640107 ']' 00:06:03.984 00:51:57 json_config -- common/autotest_common.sh@950 -- # kill -0 3640107 00:06:03.984 00:51:57 json_config -- common/autotest_common.sh@951 -- # uname 00:06:03.984 00:51:57 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.984 00:51:57 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3640107 00:06:04.242 00:51:57 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.242 00:51:57 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.242 00:51:57 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3640107' 00:06:04.242 killing process with pid 3640107 00:06:04.242 00:51:57 json_config -- common/autotest_common.sh@965 -- # kill 3640107 00:06:04.242 00:51:57 json_config -- common/autotest_common.sh@970 -- # wait 3640107 00:06:05.613 00:51:58 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.613 00:51:58 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:05.613 00:51:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.613 00:51:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.613 00:51:58 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:05.613 00:51:58 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:05.613 INFO: Success 00:06:05.613 00:06:05.613 real 0m16.722s 00:06:05.613 user 0m18.661s 00:06:05.613 sys 0m2.020s 00:06:05.613 00:51:58 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.613 00:51:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.613 ************************************ 00:06:05.613 END TEST json_config 00:06:05.613 ************************************ 00:06:05.872 00:51:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:05.872 00:51:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.872 00:51:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.872 00:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:05.872 ************************************ 00:06:05.872 START TEST json_config_extra_key 00:06:05.872 ************************************ 00:06:05.872 00:51:58 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.872 00:51:58 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.872 00:51:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.872 00:51:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.872 00:51:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.872 00:51:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.872 00:51:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.872 00:51:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.872 00:51:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:05.872 00:51:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.872 00:51:58 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.872 00:51:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:05.872 INFO: launching applications... 00:06:05.872 00:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:05.872 00:51:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:05.872 00:51:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:05.872 00:51:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.872 00:51:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.872 00:51:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.873 00:51:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.873 00:51:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.873 00:51:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3641041 00:06:05.873 00:51:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:05.873 00:51:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.873 Waiting for target to run... 
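Unlike the main json_config test, json_config_extra_key boots the target from a static file (test/json_config/extra_key.json) whose entries carry deliberately unrecognized keys; the test passes if startup still succeeds. The launch reduces to:

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK_DIR/test/json_config/extra_key.json" &   # config with extra, ignorable keys
pid=$!
waitforlisten "$pid" /var/tmp/spdk_tgt.sock && echo ''     # empty echo mirrors the traced success marker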
00:06:05.873 00:51:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3641041 /var/tmp/spdk_tgt.sock 00:06:05.873 00:51:58 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3641041 ']' 00:06:05.873 00:51:58 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.873 00:51:58 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.873 00:51:58 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.873 00:51:58 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.873 00:51:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:05.873 [2024-07-25 00:51:58.916785] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:05.873 [2024-07-25 00:51:58.916878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641041 ] 00:06:05.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.132 [2024-07-25 00:51:59.253594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.390 [2024-07-25 00:51:59.318067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.982 00:51:59 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.982 00:51:59 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:06.982 00:06:06.982 00:51:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:06.982 INFO: shutting down applications... 
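The nvmf/common.sh sourcing traced a little earlier derives the host identity once per run; roughly as follows (the derivation of NVME_HOSTID from the NQN is inferred from the traced values, and the nvme connect line is hypothetical usage, not part of this test):

NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare UUID, stripped from the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# Hypothetical later use when attaching to a target:
# nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"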
00:06:06.982 00:51:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3641041 ]] 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3641041 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3641041 00:06:06.982 00:51:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:07.240 00:52:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.240 00:52:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.240 00:52:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3641041 00:06:07.240 00:52:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:07.240 00:52:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:07.240 00:52:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:07.240 00:52:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:07.240 SPDK target shutdown done 00:06:07.240 00:52:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:07.240 Success 00:06:07.240 00:06:07.240 real 0m1.562s 00:06:07.240 user 0m1.531s 00:06:07.240 sys 0m0.444s 00:06:07.240 00:52:00 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.240 00:52:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.240 ************************************ 00:06:07.240 END TEST json_config_extra_key 00:06:07.240 ************************************ 00:06:07.240 00:52:00 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.241 00:52:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.241 00:52:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.241 00:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:07.499 ************************************ 00:06:07.499 START TEST alias_rpc 00:06:07.499 ************************************ 00:06:07.499 00:52:00 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.499 * Looking for test storage... 
00:06:07.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:07.499 00:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.499 00:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3641241 00:06:07.499 00:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.499 00:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3641241 00:06:07.499 00:52:00 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3641241 ']' 00:06:07.499 00:52:00 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.499 00:52:00 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.499 00:52:00 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.499 00:52:00 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.499 00:52:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.499 [2024-07-25 00:52:00.524158] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:07.499 [2024-07-25 00:52:00.524258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641241 ] 00:06:07.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.499 [2024-07-25 00:52:00.582981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.757 [2024-07-25 00:52:00.669879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.014 00:52:00 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.014 00:52:00 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:08.014 00:52:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:08.272 00:52:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3641241 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3641241 ']' 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3641241 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3641241 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3641241' 00:06:08.272 killing process with pid 3641241 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@965 -- # kill 3641241 00:06:08.272 00:52:01 alias_rpc -- common/autotest_common.sh@970 -- # wait 3641241 00:06:08.531 00:06:08.531 real 0m1.213s 00:06:08.531 user 0m1.268s 00:06:08.531 sys 0m0.433s 00:06:08.531 00:52:01 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.531 00:52:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.531 
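The killprocess helper traced in this test (and in json_config above) guards against killing a sudo wrapper by checking the process name first — SPDK reactors show up as reactor_0. An approximation of the traced steps:

killprocess() {
  local pid=$1 name
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1          # nothing to do if already gone
  name=$(ps --no-headers -o comm= "$pid")         # reactor_0 for an SPDK app
  if [ "$name" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid"                                   # SIGTERM
    wait "$pid" 2>/dev/null                       # reap; only works for our own children
  fi
}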
************************************ 00:06:08.531 END TEST alias_rpc 00:06:08.531 ************************************ 00:06:08.531 00:52:01 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:08.531 00:52:01 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:08.531 00:52:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.531 00:52:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.531 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:08.531 ************************************ 00:06:08.531 START TEST spdkcli_tcp 00:06:08.531 ************************************ 00:06:08.531 00:52:01 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:08.789 * Looking for test storage... 00:06:08.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:08.789 00:52:01 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:08.789 00:52:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3641534 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:08.789 00:52:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3641534 00:06:08.789 00:52:01 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3641534 ']' 00:06:08.789 00:52:01 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.789 00:52:01 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.789 00:52:01 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.789 00:52:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.789 00:52:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.789 [2024-07-25 00:52:01.785348] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:08.789 [2024-07-25 00:52:01.785427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641534 ] 00:06:08.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.789 [2024-07-25 00:52:01.844885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.789 [2024-07-25 00:52:01.935689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.789 [2024-07-25 00:52:01.935695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.047 00:52:02 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.047 00:52:02 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:09.047 00:52:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3641547 00:06:09.047 00:52:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:09.047 00:52:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:09.305 [ 00:06:09.305 "bdev_malloc_delete", 00:06:09.305 "bdev_malloc_create", 00:06:09.305 "bdev_null_resize", 00:06:09.305 "bdev_null_delete", 00:06:09.305 "bdev_null_create", 00:06:09.305 "bdev_nvme_cuse_unregister", 00:06:09.305 "bdev_nvme_cuse_register", 00:06:09.305 "bdev_opal_new_user", 00:06:09.305 "bdev_opal_set_lock_state", 00:06:09.305 "bdev_opal_delete", 00:06:09.305 "bdev_opal_get_info", 00:06:09.305 "bdev_opal_create", 00:06:09.305 "bdev_nvme_opal_revert", 00:06:09.305 "bdev_nvme_opal_init", 00:06:09.305 "bdev_nvme_send_cmd", 00:06:09.305 "bdev_nvme_get_path_iostat", 00:06:09.305 "bdev_nvme_get_mdns_discovery_info", 00:06:09.305 "bdev_nvme_stop_mdns_discovery", 00:06:09.305 "bdev_nvme_start_mdns_discovery", 00:06:09.305 "bdev_nvme_set_multipath_policy", 00:06:09.305 "bdev_nvme_set_preferred_path", 00:06:09.305 "bdev_nvme_get_io_paths", 00:06:09.305 "bdev_nvme_remove_error_injection", 00:06:09.305 "bdev_nvme_add_error_injection", 00:06:09.305 "bdev_nvme_get_discovery_info", 00:06:09.305 "bdev_nvme_stop_discovery", 00:06:09.305 "bdev_nvme_start_discovery", 00:06:09.305 "bdev_nvme_get_controller_health_info", 00:06:09.305 "bdev_nvme_disable_controller", 00:06:09.305 "bdev_nvme_enable_controller", 00:06:09.305 "bdev_nvme_reset_controller", 00:06:09.305 "bdev_nvme_get_transport_statistics", 00:06:09.305 "bdev_nvme_apply_firmware", 00:06:09.305 "bdev_nvme_detach_controller", 00:06:09.305 "bdev_nvme_get_controllers", 00:06:09.305 "bdev_nvme_attach_controller", 00:06:09.305 "bdev_nvme_set_hotplug", 00:06:09.305 "bdev_nvme_set_options", 00:06:09.305 "bdev_passthru_delete", 00:06:09.305 "bdev_passthru_create", 00:06:09.305 "bdev_lvol_set_parent_bdev", 00:06:09.305 "bdev_lvol_set_parent", 00:06:09.305 "bdev_lvol_check_shallow_copy", 00:06:09.305 "bdev_lvol_start_shallow_copy", 00:06:09.305 "bdev_lvol_grow_lvstore", 00:06:09.305 "bdev_lvol_get_lvols", 00:06:09.305 "bdev_lvol_get_lvstores", 00:06:09.305 "bdev_lvol_delete", 00:06:09.305 "bdev_lvol_set_read_only", 00:06:09.305 "bdev_lvol_resize", 00:06:09.305 "bdev_lvol_decouple_parent", 00:06:09.305 "bdev_lvol_inflate", 00:06:09.305 "bdev_lvol_rename", 00:06:09.305 "bdev_lvol_clone_bdev", 00:06:09.305 "bdev_lvol_clone", 00:06:09.305 "bdev_lvol_snapshot", 00:06:09.305 "bdev_lvol_create", 00:06:09.305 "bdev_lvol_delete_lvstore", 00:06:09.305 "bdev_lvol_rename_lvstore", 
00:06:09.305 "bdev_lvol_create_lvstore", 00:06:09.305 "bdev_raid_set_options", 00:06:09.305 "bdev_raid_remove_base_bdev", 00:06:09.305 "bdev_raid_add_base_bdev", 00:06:09.305 "bdev_raid_delete", 00:06:09.305 "bdev_raid_create", 00:06:09.305 "bdev_raid_get_bdevs", 00:06:09.305 "bdev_error_inject_error", 00:06:09.305 "bdev_error_delete", 00:06:09.305 "bdev_error_create", 00:06:09.305 "bdev_split_delete", 00:06:09.305 "bdev_split_create", 00:06:09.305 "bdev_delay_delete", 00:06:09.305 "bdev_delay_create", 00:06:09.305 "bdev_delay_update_latency", 00:06:09.305 "bdev_zone_block_delete", 00:06:09.305 "bdev_zone_block_create", 00:06:09.305 "blobfs_create", 00:06:09.305 "blobfs_detect", 00:06:09.305 "blobfs_set_cache_size", 00:06:09.305 "bdev_aio_delete", 00:06:09.305 "bdev_aio_rescan", 00:06:09.305 "bdev_aio_create", 00:06:09.305 "bdev_ftl_set_property", 00:06:09.305 "bdev_ftl_get_properties", 00:06:09.305 "bdev_ftl_get_stats", 00:06:09.305 "bdev_ftl_unmap", 00:06:09.305 "bdev_ftl_unload", 00:06:09.305 "bdev_ftl_delete", 00:06:09.305 "bdev_ftl_load", 00:06:09.305 "bdev_ftl_create", 00:06:09.305 "bdev_virtio_attach_controller", 00:06:09.306 "bdev_virtio_scsi_get_devices", 00:06:09.306 "bdev_virtio_detach_controller", 00:06:09.306 "bdev_virtio_blk_set_hotplug", 00:06:09.306 "bdev_iscsi_delete", 00:06:09.306 "bdev_iscsi_create", 00:06:09.306 "bdev_iscsi_set_options", 00:06:09.306 "accel_error_inject_error", 00:06:09.306 "ioat_scan_accel_module", 00:06:09.306 "dsa_scan_accel_module", 00:06:09.306 "iaa_scan_accel_module", 00:06:09.306 "vfu_virtio_create_scsi_endpoint", 00:06:09.306 "vfu_virtio_scsi_remove_target", 00:06:09.306 "vfu_virtio_scsi_add_target", 00:06:09.306 "vfu_virtio_create_blk_endpoint", 00:06:09.306 "vfu_virtio_delete_endpoint", 00:06:09.306 "keyring_file_remove_key", 00:06:09.306 "keyring_file_add_key", 00:06:09.306 "keyring_linux_set_options", 00:06:09.306 "iscsi_get_histogram", 00:06:09.306 "iscsi_enable_histogram", 00:06:09.306 "iscsi_set_options", 00:06:09.306 "iscsi_get_auth_groups", 00:06:09.306 "iscsi_auth_group_remove_secret", 00:06:09.306 "iscsi_auth_group_add_secret", 00:06:09.306 "iscsi_delete_auth_group", 00:06:09.306 "iscsi_create_auth_group", 00:06:09.306 "iscsi_set_discovery_auth", 00:06:09.306 "iscsi_get_options", 00:06:09.306 "iscsi_target_node_request_logout", 00:06:09.306 "iscsi_target_node_set_redirect", 00:06:09.306 "iscsi_target_node_set_auth", 00:06:09.306 "iscsi_target_node_add_lun", 00:06:09.306 "iscsi_get_stats", 00:06:09.306 "iscsi_get_connections", 00:06:09.306 "iscsi_portal_group_set_auth", 00:06:09.306 "iscsi_start_portal_group", 00:06:09.306 "iscsi_delete_portal_group", 00:06:09.306 "iscsi_create_portal_group", 00:06:09.306 "iscsi_get_portal_groups", 00:06:09.306 "iscsi_delete_target_node", 00:06:09.306 "iscsi_target_node_remove_pg_ig_maps", 00:06:09.306 "iscsi_target_node_add_pg_ig_maps", 00:06:09.306 "iscsi_create_target_node", 00:06:09.306 "iscsi_get_target_nodes", 00:06:09.306 "iscsi_delete_initiator_group", 00:06:09.306 "iscsi_initiator_group_remove_initiators", 00:06:09.306 "iscsi_initiator_group_add_initiators", 00:06:09.306 "iscsi_create_initiator_group", 00:06:09.306 "iscsi_get_initiator_groups", 00:06:09.306 "nvmf_set_crdt", 00:06:09.306 "nvmf_set_config", 00:06:09.306 "nvmf_set_max_subsystems", 00:06:09.306 "nvmf_stop_mdns_prr", 00:06:09.306 "nvmf_publish_mdns_prr", 00:06:09.306 "nvmf_subsystem_get_listeners", 00:06:09.306 "nvmf_subsystem_get_qpairs", 00:06:09.306 "nvmf_subsystem_get_controllers", 00:06:09.306 "nvmf_get_stats", 00:06:09.306 
"nvmf_get_transports", 00:06:09.306 "nvmf_create_transport", 00:06:09.306 "nvmf_get_targets", 00:06:09.306 "nvmf_delete_target", 00:06:09.306 "nvmf_create_target", 00:06:09.306 "nvmf_subsystem_allow_any_host", 00:06:09.306 "nvmf_subsystem_remove_host", 00:06:09.306 "nvmf_subsystem_add_host", 00:06:09.306 "nvmf_ns_remove_host", 00:06:09.306 "nvmf_ns_add_host", 00:06:09.306 "nvmf_subsystem_remove_ns", 00:06:09.306 "nvmf_subsystem_add_ns", 00:06:09.306 "nvmf_subsystem_listener_set_ana_state", 00:06:09.306 "nvmf_discovery_get_referrals", 00:06:09.306 "nvmf_discovery_remove_referral", 00:06:09.306 "nvmf_discovery_add_referral", 00:06:09.306 "nvmf_subsystem_remove_listener", 00:06:09.306 "nvmf_subsystem_add_listener", 00:06:09.306 "nvmf_delete_subsystem", 00:06:09.306 "nvmf_create_subsystem", 00:06:09.306 "nvmf_get_subsystems", 00:06:09.306 "env_dpdk_get_mem_stats", 00:06:09.306 "nbd_get_disks", 00:06:09.306 "nbd_stop_disk", 00:06:09.306 "nbd_start_disk", 00:06:09.306 "ublk_recover_disk", 00:06:09.306 "ublk_get_disks", 00:06:09.306 "ublk_stop_disk", 00:06:09.306 "ublk_start_disk", 00:06:09.306 "ublk_destroy_target", 00:06:09.306 "ublk_create_target", 00:06:09.306 "virtio_blk_create_transport", 00:06:09.306 "virtio_blk_get_transports", 00:06:09.306 "vhost_controller_set_coalescing", 00:06:09.306 "vhost_get_controllers", 00:06:09.306 "vhost_delete_controller", 00:06:09.306 "vhost_create_blk_controller", 00:06:09.306 "vhost_scsi_controller_remove_target", 00:06:09.306 "vhost_scsi_controller_add_target", 00:06:09.306 "vhost_start_scsi_controller", 00:06:09.306 "vhost_create_scsi_controller", 00:06:09.306 "thread_set_cpumask", 00:06:09.306 "framework_get_scheduler", 00:06:09.306 "framework_set_scheduler", 00:06:09.306 "framework_get_reactors", 00:06:09.306 "thread_get_io_channels", 00:06:09.306 "thread_get_pollers", 00:06:09.306 "thread_get_stats", 00:06:09.306 "framework_monitor_context_switch", 00:06:09.306 "spdk_kill_instance", 00:06:09.306 "log_enable_timestamps", 00:06:09.306 "log_get_flags", 00:06:09.306 "log_clear_flag", 00:06:09.306 "log_set_flag", 00:06:09.306 "log_get_level", 00:06:09.306 "log_set_level", 00:06:09.306 "log_get_print_level", 00:06:09.306 "log_set_print_level", 00:06:09.306 "framework_enable_cpumask_locks", 00:06:09.306 "framework_disable_cpumask_locks", 00:06:09.306 "framework_wait_init", 00:06:09.306 "framework_start_init", 00:06:09.306 "scsi_get_devices", 00:06:09.306 "bdev_get_histogram", 00:06:09.306 "bdev_enable_histogram", 00:06:09.306 "bdev_set_qos_limit", 00:06:09.306 "bdev_set_qd_sampling_period", 00:06:09.306 "bdev_get_bdevs", 00:06:09.306 "bdev_reset_iostat", 00:06:09.306 "bdev_get_iostat", 00:06:09.306 "bdev_examine", 00:06:09.306 "bdev_wait_for_examine", 00:06:09.306 "bdev_set_options", 00:06:09.306 "notify_get_notifications", 00:06:09.306 "notify_get_types", 00:06:09.306 "accel_get_stats", 00:06:09.306 "accel_set_options", 00:06:09.306 "accel_set_driver", 00:06:09.306 "accel_crypto_key_destroy", 00:06:09.306 "accel_crypto_keys_get", 00:06:09.306 "accel_crypto_key_create", 00:06:09.306 "accel_assign_opc", 00:06:09.306 "accel_get_module_info", 00:06:09.306 "accel_get_opc_assignments", 00:06:09.306 "vmd_rescan", 00:06:09.306 "vmd_remove_device", 00:06:09.306 "vmd_enable", 00:06:09.306 "sock_get_default_impl", 00:06:09.306 "sock_set_default_impl", 00:06:09.306 "sock_impl_set_options", 00:06:09.306 "sock_impl_get_options", 00:06:09.306 "iobuf_get_stats", 00:06:09.306 "iobuf_set_options", 00:06:09.306 "keyring_get_keys", 00:06:09.306 "framework_get_pci_devices", 
00:06:09.306 "framework_get_config", 00:06:09.306 "framework_get_subsystems", 00:06:09.306 "vfu_tgt_set_base_path", 00:06:09.306 "trace_get_info", 00:06:09.306 "trace_get_tpoint_group_mask", 00:06:09.306 "trace_disable_tpoint_group", 00:06:09.306 "trace_enable_tpoint_group", 00:06:09.306 "trace_clear_tpoint_mask", 00:06:09.306 "trace_set_tpoint_mask", 00:06:09.306 "spdk_get_version", 00:06:09.306 "rpc_get_methods" 00:06:09.306 ] 00:06:09.306 00:52:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:09.306 00:52:02 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.306 00:52:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.306 00:52:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:09.306 00:52:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3641534 00:06:09.306 00:52:02 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3641534 ']' 00:06:09.306 00:52:02 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3641534 00:06:09.306 00:52:02 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:09.563 00:52:02 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.563 00:52:02 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3641534 00:06:09.563 00:52:02 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.563 00:52:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.563 00:52:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3641534' 00:06:09.563 killing process with pid 3641534 00:06:09.563 00:52:02 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3641534 00:06:09.563 00:52:02 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3641534 00:06:09.821 00:06:09.821 real 0m1.203s 00:06:09.821 user 0m2.148s 00:06:09.821 sys 0m0.426s 00:06:09.821 00:52:02 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.821 00:52:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.821 ************************************ 00:06:09.821 END TEST spdkcli_tcp 00:06:09.821 ************************************ 00:06:09.821 00:52:02 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:09.821 00:52:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.821 00:52:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.821 00:52:02 -- common/autotest_common.sh@10 -- # set +x 00:06:09.821 ************************************ 00:06:09.821 START TEST dpdk_mem_utility 00:06:09.821 ************************************ 00:06:09.821 00:52:02 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.079 * Looking for test storage... 
00:06:10.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:10.079 00:52:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:10.079 00:52:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3641740 00:06:10.079 00:52:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.079 00:52:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3641740 00:06:10.079 00:52:02 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3641740 ']' 00:06:10.079 00:52:02 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.079 00:52:02 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.079 00:52:02 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.079 00:52:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.079 00:52:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.079 [2024-07-25 00:52:03.029976] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:10.079 [2024-07-25 00:52:03.030073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641740 ] 00:06:10.079 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.079 [2024-07-25 00:52:03.087270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.079 [2024-07-25 00:52:03.171341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.338 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.338 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:10.338 00:52:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:10.338 00:52:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:10.338 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.338 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.338 { 00:06:10.338 "filename": "/tmp/spdk_mem_dump.txt" 00:06:10.338 } 00:06:10.338 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.338 00:52:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:10.338 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:10.338 1 heaps totaling size 814.000000 MiB 00:06:10.338 size: 814.000000 MiB heap id: 0 00:06:10.338 end heaps---------- 00:06:10.338 8 mempools totaling size 598.116089 MiB 00:06:10.338 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:10.338 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:10.338 size: 84.521057 MiB name: bdev_io_3641740 00:06:10.338 size: 51.011292 MiB name: evtpool_3641740 00:06:10.338 size: 50.003479 MiB name: 
msgpool_3641740 00:06:10.338 size: 21.763794 MiB name: PDU_Pool 00:06:10.338 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:10.338 size: 0.026123 MiB name: Session_Pool 00:06:10.338 end mempools------- 00:06:10.338 6 memzones totaling size 4.142822 MiB 00:06:10.338 size: 1.000366 MiB name: RG_ring_0_3641740 00:06:10.338 size: 1.000366 MiB name: RG_ring_1_3641740 00:06:10.338 size: 1.000366 MiB name: RG_ring_4_3641740 00:06:10.338 size: 1.000366 MiB name: RG_ring_5_3641740 00:06:10.338 size: 0.125366 MiB name: RG_ring_2_3641740 00:06:10.338 size: 0.015991 MiB name: RG_ring_3_3641740 00:06:10.338 end memzones------- 00:06:10.338 00:52:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:10.596 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:10.596 list of free elements. size: 12.519348 MiB 00:06:10.596 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:10.596 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:10.596 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:10.596 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:10.596 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:10.596 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:10.596 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:10.596 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:10.596 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:10.596 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:10.596 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:10.596 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:10.596 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:10.596 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:10.596 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:10.596 list of standard malloc elements. 
size: 199.218079 MiB 00:06:10.596 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:10.596 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:10.596 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:10.596 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:10.596 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:10.596 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:10.596 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:10.596 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:10.596 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:10.596 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:10.596 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:10.596 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:10.596 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:10.596 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:10.596 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:10.596 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:10.596 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:10.596 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:10.596 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:10.596 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:10.596 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:10.596 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:10.596 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:10.596 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:10.596 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:10.596 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:10.596 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:10.596 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:10.597 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:10.597 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:10.597 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:10.597 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:10.597 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:10.597 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:10.597 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:10.597 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:10.597 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:10.597 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:10.597 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:10.597 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:10.597 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:10.597 list of memzone associated elements. 
size: 602.262573 MiB 00:06:10.597 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:10.597 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:10.597 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:10.597 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:10.597 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:10.597 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3641740_0 00:06:10.597 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:10.597 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3641740_0 00:06:10.597 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:10.597 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3641740_0 00:06:10.597 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:10.597 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:10.597 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:10.597 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:10.597 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:10.597 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3641740 00:06:10.597 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:10.597 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3641740 00:06:10.597 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:10.597 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3641740 00:06:10.597 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:10.597 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:10.597 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:10.597 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:10.597 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:10.597 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:10.597 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:10.597 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:10.597 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:10.597 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3641740 00:06:10.597 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:10.597 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3641740 00:06:10.597 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:10.597 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3641740 00:06:10.597 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:10.597 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3641740 00:06:10.597 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:10.597 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3641740 00:06:10.597 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:10.597 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:10.597 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:10.597 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:10.597 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:10.597 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:10.597 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:10.597 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3641740 00:06:10.597 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:10.597 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:10.597 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:10.597 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:10.597 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:10.597 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3641740 00:06:10.597 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:10.597 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:10.597 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:10.597 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3641740 00:06:10.597 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:10.597 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3641740 00:06:10.597 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:10.597 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:10.597 00:52:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:10.597 00:52:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3641740 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3641740 ']' 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3641740 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3641740 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3641740' 00:06:10.597 killing process with pid 3641740 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3641740 00:06:10.597 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3641740 00:06:10.854 00:06:10.854 real 0m1.035s 00:06:10.854 user 0m1.002s 00:06:10.854 sys 0m0.403s 00:06:10.854 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.854 00:52:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.854 ************************************ 00:06:10.854 END TEST dpdk_mem_utility 00:06:10.854 ************************************ 00:06:10.854 00:52:03 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:10.854 00:52:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.854 00:52:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.854 00:52:03 -- common/autotest_common.sh@10 -- # set +x 00:06:10.854 ************************************ 00:06:10.854 START TEST event 00:06:10.854 ************************************ 00:06:10.854 00:52:04 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:11.111 * Looking for test storage... 
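
The dpdk_mem_utility pass that just ended reduces to four commands. A minimal manual reproduction, with $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout (a sketch; exact flags can differ between SPDK versions):

  $SPDK/build/bin/spdk_tgt &                    # start the target (pid 3641740 above); wait for /var/tmp/spdk.sock
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # writes the dump to /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones
  $SPDK/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as printed above
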
00:06:11.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:11.111 00:52:04 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:11.111 00:52:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:11.111 00:52:04 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:11.111 00:52:04 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:11.111 00:52:04 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.111 00:52:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.112 ************************************ 00:06:11.112 START TEST event_perf 00:06:11.112 ************************************ 00:06:11.112 00:52:04 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:11.112 Running I/O for 1 seconds...[2024-07-25 00:52:04.084671] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:11.112 [2024-07-25 00:52:04.084736] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641927 ] 00:06:11.112 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.112 [2024-07-25 00:52:04.145151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.112 [2024-07-25 00:52:04.236161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.112 [2024-07-25 00:52:04.236220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.112 [2024-07-25 00:52:04.236298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.112 [2024-07-25 00:52:04.236301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.481 Running I/O for 1 seconds... 00:06:12.481 lcore 0: 229266 00:06:12.481 lcore 1: 229266 00:06:12.481 lcore 2: 229265 00:06:12.481 lcore 3: 229266 00:06:12.481 done. 00:06:12.481 00:06:12.481 real 0m1.247s 00:06:12.481 user 0m4.152s 00:06:12.481 sys 0m0.090s 00:06:12.481 00:52:05 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.481 00:52:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.481 ************************************ 00:06:12.481 END TEST event_perf 00:06:12.481 ************************************ 00:06:12.481 00:52:05 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:12.481 00:52:05 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:12.481 00:52:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.481 00:52:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.481 ************************************ 00:06:12.481 START TEST event_reactor 00:06:12.481 ************************************ 00:06:12.481 00:52:05 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:12.481 [2024-07-25 00:52:05.384688] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
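
The event_perf subtest above is a single binary; the mask and duration in its banner translate directly to its flags, so a by-hand run looks like this (counters will differ from host to host):

  # 4 reactors (core mask 0xF), one-second measurement; prints one
  # "lcore N: <count>" line per reactor and then "done."
  $SPDK/test/event/event_perf/event_perf -m 0xF -t 1
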
00:06:12.481 [2024-07-25 00:52:05.384754] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642085 ] 00:06:12.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.481 [2024-07-25 00:52:05.448598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.481 [2024-07-25 00:52:05.537951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.854 test_start 00:06:13.854 oneshot 00:06:13.854 tick 100 00:06:13.854 tick 100 00:06:13.854 tick 250 00:06:13.854 tick 100 00:06:13.854 tick 100 00:06:13.854 tick 100 00:06:13.854 tick 250 00:06:13.854 tick 500 00:06:13.854 tick 100 00:06:13.854 tick 100 00:06:13.854 tick 250 00:06:13.854 tick 100 00:06:13.854 tick 100 00:06:13.854 test_end 00:06:13.854 00:06:13.854 real 0m1.245s 00:06:13.854 user 0m1.162s 00:06:13.854 sys 0m0.079s 00:06:13.854 00:52:06 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.854 00:52:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:13.854 ************************************ 00:06:13.854 END TEST event_reactor 00:06:13.854 ************************************ 00:06:13.854 00:52:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.854 00:52:06 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:13.854 00:52:06 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.854 00:52:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.854 ************************************ 00:06:13.854 START TEST event_reactor_perf 00:06:13.854 ************************************ 00:06:13.854 00:52:06 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.854 [2024-07-25 00:52:06.680374] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
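
Every START TEST / END TEST banner in this log is produced by the run_test wrapper from autotest_common.sh. A hedged sketch of what it does (the real helper also manages xtrace state and exit-code bookkeeping, omitted here):

  run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"        # source of the real/user/sys triples in this log
    echo "END TEST $name"
  }
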
00:06:13.854 [2024-07-25 00:52:06.680437] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642242 ] 00:06:13.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.854 [2024-07-25 00:52:06.741361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.854 [2024-07-25 00:52:06.833898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.787 test_start 00:06:14.787 test_end 00:06:14.787 Performance: 360206 events per second 00:06:14.787 00:06:14.787 real 0m1.249s 00:06:14.787 user 0m1.168s 00:06:14.787 sys 0m0.076s 00:06:14.787 00:52:07 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.787 00:52:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.787 ************************************ 00:06:14.787 END TEST event_reactor_perf 00:06:14.787 ************************************ 00:06:15.045 00:52:07 event -- event/event.sh@49 -- # uname -s 00:06:15.045 00:52:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:15.045 00:52:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:15.045 00:52:07 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.046 00:52:07 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.046 00:52:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.046 ************************************ 00:06:15.046 START TEST event_scheduler 00:06:15.046 ************************************ 00:06:15.046 00:52:07 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:15.046 * Looking for test storage... 00:06:15.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:15.046 00:52:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:15.046 00:52:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3642426 00:06:15.046 00:52:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:15.046 00:52:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.046 00:52:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3642426 00:06:15.046 00:52:08 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3642426 ']' 00:06:15.046 00:52:08 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.046 00:52:08 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.046 00:52:08 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
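
The scheduler app being waited on above was launched with --wait-for-rpc, so framework initialization is held back until the test picks a scheduler over RPC. The ordering that scheduler.sh drives in the following lines, replayed by hand (the test's rpc_cmd helper is a thin wrapper over rpc.py):

  $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # must land before init
  $SPDK/scripts/rpc.py framework_start_init              # reactors start; dynamic scheduler takes over
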
00:06:15.046 00:52:08 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.046 00:52:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.046 [2024-07-25 00:52:08.066946] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:15.046 [2024-07-25 00:52:08.067030] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642426 ] 00:06:15.046 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.046 [2024-07-25 00:52:08.124332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.304 [2024-07-25 00:52:08.212361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.304 [2024-07-25 00:52:08.212417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.304 [2024-07-25 00:52:08.212481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.304 [2024-07-25 00:52:08.212484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:15.304 00:52:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.304 POWER: Env isn't set yet! 00:06:15.304 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:15.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:15.304 POWER: Cannot get available frequencies of lcore 0 00:06:15.304 POWER: Attempting to initialise PSTAT power management... 
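
The POWER: lines come from DPDK's power library probing cpufreq sysfs: the ACPI probe fails because scaling_available_frequencies is missing on this host, and the PSTAT fallback then succeeds. The same knobs can be inspected directly (cpu0 substituted for the %u placeholder in the message above; availability depends on the host's cpufreq driver):

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor               # 'performance' while the test holds it
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies  # absent here, hence the ACPI failure
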
00:06:15.304 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:15.304 POWER: Initialized successfully for lcore 0 power management 00:06:15.304 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:15.304 POWER: Initialized successfully for lcore 1 power management 00:06:15.304 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:15.304 POWER: Initialized successfully for lcore 2 power management 00:06:15.304 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:15.304 POWER: Initialized successfully for lcore 3 power management 00:06:15.304 [2024-07-25 00:52:08.313428] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.304 [2024-07-25 00:52:08.313444] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.304 [2024-07-25 00:52:08.313455] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.304 00:52:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.304 [2024-07-25 00:52:08.412913] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.304 00:52:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.304 00:52:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.304 ************************************ 00:06:15.304 START TEST scheduler_create_thread 00:06:15.304 ************************************ 00:06:15.304 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:15.304 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.304 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.304 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.304 2 00:06:15.304 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.304 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.304 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.304 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.562 3 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.562 4 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.562 5 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.562 6 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.562 7 00:06:15.562 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.563 8 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.563 9 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.563 10 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.563 00:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.496 00:52:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.496 00:52:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:16.496 00:52:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.496 00:52:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.874 00:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.874 00:52:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:17.874 00:52:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:17.874 00:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.874 00:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.807 00:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.807 00:06:18.807 real 0m3.380s 00:06:18.807 user 0m0.012s 00:06:18.807 sys 0m0.002s 00:06:18.807 00:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.807 00:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.807 ************************************ 00:06:18.807 END TEST scheduler_create_thread 00:06:18.807 ************************************ 00:06:18.807 00:52:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:18.807 00:52:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3642426 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3642426 ']' 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3642426 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
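
The scheduler_create_thread subtest above drives everything through plugin RPCs; the traced calls can be replayed by hand, assuming scheduler_plugin is importable by rpc.py as it is when run from the test directory (thread ids 11 and 12 are simply what the create calls returned in this run):

  $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0  # prints the new thread id (11 here)
  $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50            # thread 11 to 50% active
  $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                   # drop the 'deleted' thread
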
00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3642426 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3642426' 00:06:18.807 killing process with pid 3642426 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3642426 00:06:18.807 00:52:11 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3642426 00:06:19.064 [2024-07-25 00:52:12.202349] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:19.323 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:19.323 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:19.323 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:19.323 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:19.323 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:19.323 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:19.323 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:19.323 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:19.323 00:06:19.323 real 0m4.492s 00:06:19.323 user 0m8.013s 00:06:19.323 sys 0m0.315s 00:06:19.323 00:52:12 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.323 00:52:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.323 ************************************ 00:06:19.323 END TEST event_scheduler 00:06:19.323 ************************************ 00:06:19.581 00:52:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:19.581 00:52:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:19.581 00:52:12 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.581 00:52:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.581 00:52:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.581 ************************************ 00:06:19.581 START TEST app_repeat 00:06:19.581 ************************************ 00:06:19.581 00:52:12 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3643007 00:06:19.581 00:52:12 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3643007' 00:06:19.581 Process app_repeat pid: 3643007 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:19.581 spdk_app_start Round 0 00:06:19.581 00:52:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3643007 /var/tmp/spdk-nbd.sock 00:06:19.581 00:52:12 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3643007 ']' 00:06:19.581 00:52:12 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.581 00:52:12 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.581 00:52:12 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.581 00:52:12 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.581 00:52:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.581 [2024-07-25 00:52:12.540680] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:19.581 [2024-07-25 00:52:12.540753] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643007 ] 00:06:19.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.581 [2024-07-25 00:52:12.603369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.581 [2024-07-25 00:52:12.693378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.581 [2024-07-25 00:52:12.693384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.838 00:52:12 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.838 00:52:12 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:19.839 00:52:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.096 Malloc0 00:06:20.096 00:52:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.353 Malloc1 00:06:20.353 00:52:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.353 00:52:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.353 00:52:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.353 00:52:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.353 00:52:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.353 00:52:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.353 00:52:13 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.353 00:52:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.354 00:52:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.354 00:52:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.354 00:52:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.354 00:52:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.354 00:52:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.354 00:52:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.354 00:52:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.354 00:52:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.611 /dev/nbd0 00:06:20.611 00:52:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.611 00:52:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.611 1+0 records in 00:06:20.611 1+0 records out 00:06:20.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181239 s, 22.6 MB/s 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.611 00:52:13 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.611 00:52:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.611 00:52:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.611 00:52:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.868 /dev/nbd1 00:06:20.868 00:52:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.868 00:52:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:20.868 00:52:13 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.868 1+0 records in 00:06:20.868 1+0 records out 00:06:20.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222232 s, 18.4 MB/s 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.868 00:52:13 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.868 00:52:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.868 00:52:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.868 00:52:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.868 00:52:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.868 00:52:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.126 { 00:06:21.126 "nbd_device": "/dev/nbd0", 00:06:21.126 "bdev_name": "Malloc0" 00:06:21.126 }, 00:06:21.126 { 00:06:21.126 "nbd_device": "/dev/nbd1", 00:06:21.126 "bdev_name": "Malloc1" 00:06:21.126 } 00:06:21.126 ]' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.126 { 00:06:21.126 "nbd_device": "/dev/nbd0", 00:06:21.126 "bdev_name": "Malloc0" 00:06:21.126 }, 00:06:21.126 { 00:06:21.126 "nbd_device": "/dev/nbd1", 00:06:21.126 "bdev_name": "Malloc1" 00:06:21.126 } 00:06:21.126 ]' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.126 /dev/nbd1' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.126 /dev/nbd1' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.126 00:52:14 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.126 256+0 records in 00:06:21.126 256+0 records out 00:06:21.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501056 s, 209 MB/s 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.126 256+0 records in 00:06:21.126 256+0 records out 00:06:21.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236315 s, 44.4 MB/s 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.126 256+0 records in 00:06:21.126 256+0 records out 00:06:21.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229296 s, 45.7 MB/s 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.126 00:52:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.384 00:52:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:21.384 00:52:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.384 00:52:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.641 00:52:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.898 00:52:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.157 00:52:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.157 00:52:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.439 00:52:15 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:22.706 [2024-07-25 00:52:15.613011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.706 [2024-07-25 00:52:15.703347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.706 [2024-07-25 00:52:15.703348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.706 [2024-07-25 00:52:15.762200] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.706 [2024-07-25 00:52:15.762300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.984 00:52:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.984 00:52:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:25.984 spdk_app_start Round 1 00:06:25.984 00:52:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3643007 /var/tmp/spdk-nbd.sock 00:06:25.984 00:52:18 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3643007 ']' 00:06:25.984 00:52:18 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.984 00:52:18 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.985 00:52:18 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.985 00:52:18 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.985 00:52:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.985 00:52:18 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.985 00:52:18 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:25.985 00:52:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.985 Malloc0 00:06:25.985 00:52:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.985 Malloc1 00:06:26.242 00:52:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
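
Each app_repeat round that follows repeats the same nbd round-trip; condensed from the trace, with $SPDK for the long workspace path and the temp file names as in the log:

  $SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0 (repeated for Malloc1)
  $SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # expose the bdev as a kernel block device
  dd if=/dev/urandom of=$SPDK/test/event/nbdrandtest bs=4096 count=256
  dd if=$SPDK/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write the pattern through nbd
  cmp -b -n 1M $SPDK/test/event/nbdrandtest /dev/nbd0                              # read back and verify
  $SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
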
00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.242 00:52:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.500 /dev/nbd0 00:06:26.500 00:52:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.500 00:52:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.500 1+0 records in 00:06:26.500 1+0 records out 00:06:26.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181712 s, 22.5 MB/s 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:26.500 00:52:19 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:26.500 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.500 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.500 00:52:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.758 /dev/nbd1 00:06:26.758 00:52:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.758 00:52:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
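
waitfornbd, traced above for nbd0, polls /proc/partitions until the kernel publishes the device, then proves it is readable with a single direct-I/O block. A sketch reconstructed from the trace; the retry delay is an assumption, since this run found the device on the first poll:

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait up to 20 polls for the device to show up in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry delay, not visible in this trace
        done
        # One 4 KiB direct read confirms the device actually serves data.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
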
00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.758 1+0 records in 00:06:26.758 1+0 records out 00:06:26.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187663 s, 21.8 MB/s 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:26.758 00:52:19 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:26.758 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.758 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.758 00:52:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.758 00:52:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.758 00:52:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.016 { 00:06:27.016 "nbd_device": "/dev/nbd0", 00:06:27.016 "bdev_name": "Malloc0" 00:06:27.016 }, 00:06:27.016 { 00:06:27.016 "nbd_device": "/dev/nbd1", 00:06:27.016 "bdev_name": "Malloc1" 00:06:27.016 } 00:06:27.016 ]' 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.016 { 00:06:27.016 "nbd_device": "/dev/nbd0", 00:06:27.016 "bdev_name": "Malloc0" 00:06:27.016 }, 00:06:27.016 { 00:06:27.016 "nbd_device": "/dev/nbd1", 00:06:27.016 "bdev_name": "Malloc1" 00:06:27.016 } 00:06:27.016 ]' 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.016 /dev/nbd1' 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.016 /dev/nbd1' 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.016 00:52:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.016 00:52:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.016 00:52:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.016 00:52:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.016 256+0 records in 00:06:27.016 256+0 records out 00:06:27.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604916 s, 173 MB/s 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.017 256+0 records in 00:06:27.017 256+0 records out 00:06:27.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233753 s, 44.9 MB/s 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.017 256+0 records in 00:06:27.017 256+0 records out 00:06:27.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258167 s, 40.6 MB/s 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.017 00:52:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.275 
00:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.275 00:52:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.533 00:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.791 00:52:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.791 00:52:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.049 00:52:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.306 [2024-07-25 00:52:21.404100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.564 [2024-07-25 00:52:21.493552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.564 [2024-07-25 00:52:21.493558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.564 [2024-07-25 00:52:21.551683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
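
nbd_get_count, seen above after the disks are stopped, asks the target for its disk list and counts the /dev/nbd entries in the JSON reply; on an empty list the count is 0, which is exactly what the test asserts before killing the app. A sketch of the same pipeline:

    nbd_get_count() {
        local rpc_server=$1 json names count
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        json=$("$rpc" -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero on zero matches, hence the true fallback.
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }
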
00:06:28.564 [2024-07-25 00:52:21.551765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.089 00:52:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.089 00:52:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.089 spdk_app_start Round 2 00:06:31.089 00:52:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3643007 /var/tmp/spdk-nbd.sock 00:06:31.089 00:52:24 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3643007 ']' 00:06:31.089 00:52:24 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.089 00:52:24 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.089 00:52:24 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.089 00:52:24 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.089 00:52:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.347 00:52:24 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.347 00:52:24 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:31.347 00:52:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.605 Malloc0 00:06:31.605 00:52:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.863 Malloc1 00:06:31.863 00:52:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.863 00:52:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.121 /dev/nbd0 00:06:32.121 
00:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.121 00:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.121 1+0 records in 00:06:32.121 1+0 records out 00:06:32.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181644 s, 22.5 MB/s 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:32.121 00:52:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:32.121 00:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.121 00:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.121 00:52:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.379 /dev/nbd1 00:06:32.379 00:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.379 00:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.379 1+0 records in 00:06:32.379 1+0 records out 00:06:32.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180839 s, 22.6 MB/s 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:32.379 00:52:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:32.379 00:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.379 00:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.379 00:52:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.379 00:52:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.379 00:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.637 { 00:06:32.637 "nbd_device": "/dev/nbd0", 00:06:32.637 "bdev_name": "Malloc0" 00:06:32.637 }, 00:06:32.637 { 00:06:32.637 "nbd_device": "/dev/nbd1", 00:06:32.637 "bdev_name": "Malloc1" 00:06:32.637 } 00:06:32.637 ]' 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.637 { 00:06:32.637 "nbd_device": "/dev/nbd0", 00:06:32.637 "bdev_name": "Malloc0" 00:06:32.637 }, 00:06:32.637 { 00:06:32.637 "nbd_device": "/dev/nbd1", 00:06:32.637 "bdev_name": "Malloc1" 00:06:32.637 } 00:06:32.637 ]' 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.637 /dev/nbd1' 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.637 /dev/nbd1' 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.637 00:52:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.896 256+0 records in 00:06:32.896 256+0 records out 00:06:32.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506744 s, 207 MB/s 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.896 256+0 records in 00:06:32.896 256+0 records out 00:06:32.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233357 s, 44.9 MB/s 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.896 256+0 records in 00:06:32.896 256+0 records out 00:06:32.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251177 s, 41.7 MB/s 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.896 00:52:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
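
The write/verify pass traced above seeds a 1 MiB random file, copies it onto every NBD device with direct I/O, then reads each device back through cmp against the same file. The pattern, reduced to its essentials with a generic temp path standing in for the workspace file:

    tmp_file=/tmp/nbdrandtest
    nbd_list=('/dev/nbd0' '/dev/nbd1')

    # Write phase: the same random payload lands on every device.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # Verify phase: each device must match the payload byte for byte.
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"
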
00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.153 00:52:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.411 00:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.669 00:52:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.669 00:52:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.926 00:52:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.184 [2024-07-25 00:52:27.182463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.184 [2024-07-25 00:52:27.271231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.184 [2024-07-25 00:52:27.271231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.184 [2024-07-25 00:52:27.331197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.184 [2024-07-25 00:52:27.331308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
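
Each app_repeat round follows the same shape: wait for the app to come back up on the socket, create the two malloc bdevs, run the NBD data verify, then kill the instance over RPC and give it three seconds to restart. A condensed sketch of the driving loop, assuming $app_pid holds the pid of the app_repeat binary launched at the start of the test and $rpc/$sock are set as above:

    for i in {0..2}; do
        echo "spdk_app_start Round $((i + 1))"
        waitforlisten "$app_pid" "$sock"
        "$rpc" -s "$sock" bdev_malloc_create 64 4096   # Malloc0
        "$rpc" -s "$sock" bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM
        sleep 3   # app_repeat traps SIGTERM and restarts the app
    done
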
00:06:37.464 00:52:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3643007 /var/tmp/spdk-nbd.sock 00:06:37.464 00:52:29 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3643007 ']' 00:06:37.464 00:52:29 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.464 00:52:29 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.464 00:52:29 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.464 00:52:29 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.464 00:52:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:37.464 00:52:30 event.app_repeat -- event/event.sh@39 -- # killprocess 3643007 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3643007 ']' 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3643007 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3643007 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3643007' 00:06:37.464 killing process with pid 3643007 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3643007 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3643007 00:06:37.464 spdk_app_start is called in Round 0. 00:06:37.464 Shutdown signal received, stop current app iteration 00:06:37.464 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 reinitialization... 00:06:37.464 spdk_app_start is called in Round 1. 00:06:37.464 Shutdown signal received, stop current app iteration 00:06:37.464 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 reinitialization... 00:06:37.464 spdk_app_start is called in Round 2. 00:06:37.464 Shutdown signal received, stop current app iteration 00:06:37.464 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 reinitialization... 00:06:37.464 spdk_app_start is called in Round 3. 
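
killprocess, traced above for pid 3643007, refuses to signal anything it cannot identify: it checks the pid is alive, resolves its command name, escalates through sudo only when the target was launched under sudo, then kills and reaps it. A sketch of that guard logic (the non-Linux branch is omitted; this run only exercises the Linux path):

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                    # pid must still exist
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                          # app is a child of sudo
        else
            kill "$pid"
        fi
        wait "$pid"                                   # reap it before returning
    }
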
00:06:37.464 Shutdown signal received, stop current app iteration 00:06:37.464 00:52:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:37.464 00:52:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:37.464 00:06:37.464 real 0m17.934s 00:06:37.464 user 0m39.092s 00:06:37.464 sys 0m3.221s 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.464 00:52:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 ************************************ 00:06:37.464 END TEST app_repeat 00:06:37.464 ************************************ 00:06:37.464 00:52:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:37.464 00:52:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:37.464 00:52:30 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.464 00:52:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.464 00:52:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 ************************************ 00:06:37.464 START TEST cpu_locks 00:06:37.464 ************************************ 00:06:37.464 00:52:30 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:37.464 * Looking for test storage... 00:06:37.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:37.464 00:52:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:37.464 00:52:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:37.464 00:52:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:37.464 00:52:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:37.464 00:52:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.464 00:52:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.464 00:52:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 ************************************ 00:06:37.464 START TEST default_locks 00:06:37.464 ************************************ 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3645355 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3645355 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3645355 ']' 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
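
default_locks starts a bare spdk_tgt on core mask 0x1 and blocks in waitforlisten until the RPC socket answers; the trace records the banner and a retry budget of max_retries=100. A minimal sketch of such a wait loop; the probe shown here (rpc_get_methods with a short timeout) is one reasonable choice, not necessarily the exact one the harness uses:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1     # target died while we waited
            if "$rpc" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0                   # socket is up and serving RPC
            fi
            sleep 0.5
        done
        return 1
    }
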
00:06:37.464 00:52:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.464 00:52:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.722 [2024-07-25 00:52:30.629822] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:37.722 [2024-07-25 00:52:30.629901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645355 ] 00:06:37.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.722 [2024-07-25 00:52:30.689652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.722 [2024-07-25 00:52:30.773894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.980 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.980 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:37.980 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3645355 00:06:37.980 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3645355 00:06:37.980 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.238 lslocks: write error 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3645355 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3645355 ']' 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3645355 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3645355 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3645355' 00:06:38.238 killing process with pid 3645355 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3645355 00:06:38.238 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3645355 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3645355 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3645355 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3645355 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3645355 ']' 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3645355) - No such process 00:06:38.804 ERROR: process (pid: 3645355) is no longer running 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.804 00:06:38.804 real 0m1.193s 00:06:38.804 user 0m1.153s 00:06:38.804 sys 0m0.496s 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.804 00:52:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.804 ************************************ 00:06:38.804 END TEST default_locks 00:06:38.804 ************************************ 00:06:38.804 00:52:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:38.804 00:52:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.804 00:52:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.804 00:52:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.804 ************************************ 00:06:38.804 START TEST default_locks_via_rpc 00:06:38.804 ************************************ 00:06:38.804 00:52:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:38.804 00:52:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3645521 00:06:38.804 00:52:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.804 00:52:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3645521 00:06:38.804 00:52:31 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3645521 ']' 00:06:38.804 00:52:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.804 00:52:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.805 00:52:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.805 00:52:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.805 00:52:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.805 [2024-07-25 00:52:31.869904] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:38.805 [2024-07-25 00:52:31.870006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645521 ] 00:06:38.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.805 [2024-07-25 00:52:31.928266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.063 [2024-07-25 00:52:32.018398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3645521 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3645521 00:06:39.321 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.579 00:52:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3645521 00:06:39.579 00:52:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3645521 ']' 00:06:39.579 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3645521 00:06:39.579 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:39.579 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.579 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3645521 00:06:39.580 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.580 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.580 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3645521' 00:06:39.580 killing process with pid 3645521 00:06:39.580 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3645521 00:06:39.580 00:52:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3645521 00:06:40.145 00:06:40.145 real 0m1.258s 00:06:40.145 user 0m1.199s 00:06:40.145 sys 0m0.532s 00:06:40.145 00:52:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.145 00:52:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.145 ************************************ 00:06:40.145 END TEST default_locks_via_rpc 00:06:40.145 ************************************ 00:06:40.145 00:52:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.145 00:52:33 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:40.145 00:52:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.145 00:52:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.145 ************************************ 00:06:40.145 START TEST non_locking_app_on_locked_coremask 00:06:40.145 ************************************ 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3645697 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3645697 /var/tmp/spdk.sock 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3645697 ']' 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
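
The two default_locks variants above assert lock state in opposite directions: locks_exist greps the target's lslocks table for the spdk_cpu_lock file (the stray 'lslocks: write error' is lslocks hitting the closed pipe once grep -q exits early), while no_locks expects no lock files at all. Sketches of both checks; the lock-file glob is an assumption, since the trace only shows the empty-array test:

    locks_exist() {
        # The target takes a POSIX lock on a per-core file; lslocks lists it.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    no_locks() {
        local lock_files=()
        shopt -s nullglob
        lock_files=(/var/tmp/spdk_cpu_lock*)   # assumed location and name
        shopt -u nullglob
        (( ${#lock_files[@]} == 0 ))
    }
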
00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.145 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.145 [2024-07-25 00:52:33.172926] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:40.145 [2024-07-25 00:52:33.173022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645697 ] 00:06:40.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.145 [2024-07-25 00:52:33.232165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.402 [2024-07-25 00:52:33.318358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3645813 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3645813 /var/tmp/spdk2.sock 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3645813 ']' 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.660 00:52:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.660 [2024-07-25 00:52:33.622069] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:40.660 [2024-07-25 00:52:33.622153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645813 ] 00:06:40.660 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.660 [2024-07-25 00:52:33.719042] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
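
non_locking_app_on_locked_coremask, running here, proves the lock really guards the core: a second target on the same 0x1 mask comes up only because it is told not to take the lock and talks on its own RPC socket, which is why the 'CPU core locks deactivated' notice appears for the second pid. A sketch of the two launches with the flags from this trace:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    # First instance owns the POSIX lock for core 0.
    "$spdk_tgt" -m 0x1 &
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    # Second instance shares core 0: it must skip the lock and use its own socket.
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock
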
00:06:40.660 [2024-07-25 00:52:33.719081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.917 [2024-07-25 00:52:33.903533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.484 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.484 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:41.484 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3645697 00:06:41.484 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3645697 00:06:41.484 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.115 lslocks: write error 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3645697 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3645697 ']' 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3645697 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3645697 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3645697' 00:06:42.115 killing process with pid 3645697 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3645697 00:06:42.115 00:52:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3645697 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3645813 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3645813 ']' 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3645813 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3645813 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3645813' 00:06:42.679 
killing process with pid 3645813 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3645813 00:06:42.679 00:52:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3645813 00:06:43.243 00:06:43.243 real 0m3.085s 00:06:43.243 user 0m3.216s 00:06:43.243 sys 0m1.029s 00:06:43.243 00:52:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.243 00:52:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.243 ************************************ 00:06:43.243 END TEST non_locking_app_on_locked_coremask 00:06:43.243 ************************************ 00:06:43.243 00:52:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.243 00:52:36 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.243 00:52:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.243 00:52:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.243 ************************************ 00:06:43.243 START TEST locking_app_on_unlocked_coremask 00:06:43.243 ************************************ 00:06:43.243 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:43.243 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3646121 00:06:43.243 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.243 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3646121 /var/tmp/spdk.sock 00:06:43.243 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3646121 ']' 00:06:43.243 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.243 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.244 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.244 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.244 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.244 [2024-07-25 00:52:36.307650] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:43.244 [2024-07-25 00:52:36.307749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646121 ] 00:06:43.244 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.244 [2024-07-25 00:52:36.365759] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
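The locking_app_on_unlocked_coremask test starting here hinges on the --disable-cpumask-locks flag visible at event/cpu_locks.sh@97: the first target skips creating its /var/tmp/spdk_cpu_lock_* file (hence the "CPU core locks deactivated" notice), so a second target can come up on the same core. A minimal sketch of that launch pattern, with the mask and socket path taken from this run (pids and absolute paths will differ elsewhere):

  # first target: core 0, lock files suppressed
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # second target: same core, default locking, separate RPC socket
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &

Only the second instance ends up holding the core 0 lock, which is exactly what the locks_exist check below asserts.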
00:06:43.244 [2024-07-25 00:52:36.365794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.501 [2024-07-25 00:52:36.453086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3646248 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3646248 /var/tmp/spdk2.sock 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3646248 ']' 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.759 00:52:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.759 [2024-07-25 00:52:36.758604] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:43.759 [2024-07-25 00:52:36.758692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646248 ] 00:06:43.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.759 [2024-07-25 00:52:36.856809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.016 [2024-07-25 00:52:37.041345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.581 00:52:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.581 00:52:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:44.581 00:52:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3646248 00:06:44.581 00:52:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3646248 00:06:44.581 00:52:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.145 lslocks: write error 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3646121 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3646121 ']' 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3646121 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3646121 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3646121' 00:06:45.145 killing process with pid 3646121 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3646121 00:06:45.145 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3646121 00:06:46.079 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3646248 00:06:46.079 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3646248 ']' 00:06:46.079 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3646248 00:06:46.079 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:46.079 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.079 00:52:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3646248 00:06:46.079 00:52:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
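The locks_exist helper traced above at event/cpu_locks.sh@22 decides lock ownership by asking the kernel which file locks a pid holds; the "lslocks: write error" line is lslocks complaining about its output pipe, most likely because grep -q exits at the first match and closes it, and is harmless here. A sketch of the same check, using the helper name and pid from this run:

  locks_exist() {
    # succeeds only if the pid holds an flock on a /var/tmp/spdk_cpu_lock_* file
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 3646248   # pid of the second target in this run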
00:06:46.079 00:52:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.079 00:52:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3646248' 00:06:46.079 killing process with pid 3646248 00:06:46.079 00:52:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3646248 00:06:46.079 00:52:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3646248 00:06:46.337 00:06:46.337 real 0m3.176s 00:06:46.337 user 0m3.311s 00:06:46.337 sys 0m1.052s 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.337 ************************************ 00:06:46.337 END TEST locking_app_on_unlocked_coremask 00:06:46.337 ************************************ 00:06:46.337 00:52:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.337 00:52:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.337 00:52:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.337 00:52:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.337 ************************************ 00:06:46.337 START TEST locking_app_on_locked_coremask 00:06:46.337 ************************************ 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3646555 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3646555 /var/tmp/spdk.sock 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3646555 ']' 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.337 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.595 [2024-07-25 00:52:39.531390] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
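Each test tears down through the killprocess helper whose trace (uname, ps --no-headers -o comm=, kill, wait) repeats throughout this log. A simplified sketch of what those traced steps amount to (the real helper also refuses to kill a process whose comm is sudo, as the reactor_0 = sudo comparison above shows):

  killprocess() {
    local pid=$1
    ps --no-headers -o comm= "$pid"   # sanity-check the target, e.g. reactor_0
    kill "$pid"                       # SIGTERM the reactor
    wait "$pid"                       # reap it so the next test starts clean
  }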
00:06:46.595 [2024-07-25 00:52:39.531494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646555 ] 00:06:46.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.595 [2024-07-25 00:52:39.596254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.595 [2024-07-25 00:52:39.691810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3646590 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3646590 /var/tmp/spdk2.sock 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3646590 /var/tmp/spdk2.sock 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3646590 /var/tmp/spdk2.sock 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3646590 ']' 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.855 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.856 00:52:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.856 [2024-07-25 00:52:39.994016] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
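locking_app_on_locked_coremask inverts the expectation: the second spdk_tgt must fail to start, so waitforlisten is wrapped in autotest_common's NOT helper, whose es accounting is visible in the surrounding trace. A simplified sketch of its behavior (the real helper also special-cases signal deaths via the es > 128 check seen in the trace):

  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # killed by a signal: not the failure we wanted
    (( es != 0 ))                # succeed only when the wrapped command failed
  }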
00:06:46.856 [2024-07-25 00:52:39.994107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646590 ] 00:06:47.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.114 [2024-07-25 00:52:40.094335] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3646555 has claimed it. 00:06:47.114 [2024-07-25 00:52:40.094420] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3646590) - No such process 00:06:47.679 ERROR: process (pid: 3646590) is no longer running 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3646555 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3646555 00:06:47.679 00:52:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.937 lslocks: write error 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3646555 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3646555 ']' 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3646555 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3646555 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3646555' 00:06:47.937 killing process with pid 3646555 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3646555 00:06:47.937 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3646555 00:06:48.502 00:06:48.502 real 0m1.974s 00:06:48.502 user 0m2.154s 00:06:48.502 sys 0m0.665s 00:06:48.502 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.502 00:52:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.502 ************************************ 00:06:48.502 END TEST locking_app_on_locked_coremask 00:06:48.502 ************************************ 00:06:48.502 00:52:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:48.502 00:52:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.502 00:52:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.502 00:52:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.502 ************************************ 00:06:48.502 START TEST locking_overlapped_coremask 00:06:48.502 ************************************ 00:06:48.502 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:48.502 00:52:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3646851 00:06:48.503 00:52:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:48.503 00:52:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3646851 /var/tmp/spdk.sock 00:06:48.503 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3646851 ']' 00:06:48.503 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.503 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.503 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.503 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.503 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.503 [2024-07-25 00:52:41.556821] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:06:48.503 [2024-07-25 00:52:41.556906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646851 ] 00:06:48.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.503 [2024-07-25 00:52:41.619783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.761 [2024-07-25 00:52:41.710005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.761 [2024-07-25 00:52:41.710073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.761 [2024-07-25 00:52:41.710075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3646856 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3646856 /var/tmp/spdk2.sock 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3646856 /var/tmp/spdk2.sock 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3646856 /var/tmp/spdk2.sock 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3646856 ']' 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.019 00:52:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.019 [2024-07-25 00:52:42.013010] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
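The collision that follows is plain mask intersection: the first target holds -m 0x7 (cores 0-2) and the second requests -m 0x1c (cores 2-4), so both want core 2. The overlap can be verified with shell arithmetic:

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2

which matches the "Cannot create lock on core 2" error below.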
00:06:49.019 [2024-07-25 00:52:42.013091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646856 ] 00:06:49.019 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.019 [2024-07-25 00:52:42.101871] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3646851 has claimed it. 00:06:49.019 [2024-07-25 00:52:42.101939] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3646856) - No such process 00:06:49.584 ERROR: process (pid: 3646856) is no longer running 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3646851 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3646851 ']' 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3646851 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.584 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3646851 00:06:49.841 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:49.841 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:49.841 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3646851' 00:06:49.841 killing process with pid 3646851 00:06:49.841 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3646851 00:06:49.841 00:52:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3646851 00:06:50.098 00:06:50.098 real 0m1.657s 00:06:50.098 user 0m4.464s 00:06:50.098 sys 0m0.477s 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.099 ************************************ 00:06:50.099 END TEST locking_overlapped_coremask 00:06:50.099 ************************************ 00:06:50.099 00:52:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.099 00:52:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.099 00:52:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.099 00:52:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.099 ************************************ 00:06:50.099 START TEST locking_overlapped_coremask_via_rpc 00:06:50.099 ************************************ 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3647028 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3647028 /var/tmp/spdk.sock 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3647028 ']' 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.099 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.356 [2024-07-25 00:52:43.262446] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:50.356 [2024-07-25 00:52:43.262541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647028 ] 00:06:50.356 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.356 [2024-07-25 00:52:43.322787] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.356 [2024-07-25 00:52:43.322824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.356 [2024-07-25 00:52:43.411190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.356 [2024-07-25 00:52:43.411260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.356 [2024-07-25 00:52:43.411263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3647156 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3647156 /var/tmp/spdk2.sock 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3647156 ']' 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.614 00:52:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.614 [2024-07-25 00:52:43.699523] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:50.614 [2024-07-25 00:52:43.699617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647156 ] 00:06:50.614 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.871 [2024-07-25 00:52:43.786834] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
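Both targets in this test start with --disable-cpumask-locks and only turn locking on at runtime through the framework_enable_cpumask_locks RPC (the rpc_cmd calls that follow). Assuming rpc_cmd wraps scripts/rpc.py as usual, the equivalent direct calls with the socket paths from this run would be:

  scripts/rpc.py framework_enable_cpumask_locks                         # first target on /var/tmp/spdk.sock: claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: expected to fail on core 2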
00:06:50.871 [2024-07-25 00:52:43.786864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.871 [2024-07-25 00:52:43.962905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.871 [2024-07-25 00:52:43.966298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.871 [2024-07-25 00:52:43.966300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.835 [2024-07-25 00:52:44.650334] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3647028 has claimed it. 
00:06:51.835 request: 00:06:51.835 { 00:06:51.835 "method": "framework_enable_cpumask_locks", 00:06:51.835 "req_id": 1 00:06:51.835 } 00:06:51.835 Got JSON-RPC error response 00:06:51.835 response: 00:06:51.835 { 00:06:51.835 "code": -32603, 00:06:51.835 "message": "Failed to claim CPU core: 2" 00:06:51.835 } 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.835 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3647028 /var/tmp/spdk.sock 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3647028 ']' 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3647156 /var/tmp/spdk2.sock 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3647156 ']' 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
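The -32603 code in the response above is the generic JSON-RPC internal-error status; the actionable part is the message naming core 2. After the failed claim, check_remaining_locks (event/cpu_locks.sh@36-38 in the trace below) asserts that exactly the first target's cores still own lock files, one zero-padded file per claimed core, which can be inspected directly:

  ls /var/tmp/spdk_cpu_lock_*
  # expected for mask 0x7: spdk_cpu_lock_000, spdk_cpu_lock_001, spdk_cpu_lock_002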
00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.836 00:52:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.093 00:52:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.094 00:52:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:52.094 00:52:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.094 00:52:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.094 00:52:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.094 00:52:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.094 00:06:52.094 real 0m1.938s 00:06:52.094 user 0m1.023s 00:06:52.094 sys 0m0.178s 00:06:52.094 00:52:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.094 00:52:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.094 ************************************ 00:06:52.094 END TEST locking_overlapped_coremask_via_rpc 00:06:52.094 ************************************ 00:06:52.094 00:52:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.094 00:52:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3647028 ]] 00:06:52.094 00:52:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3647028 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3647028 ']' 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3647028 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3647028 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3647028' 00:06:52.094 killing process with pid 3647028 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3647028 00:06:52.094 00:52:45 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3647028 00:06:52.659 00:52:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3647156 ]] 00:06:52.659 00:52:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3647156 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3647156 ']' 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3647156 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3647156 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3647156' 00:06:52.659 killing process with pid 3647156 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3647156 00:06:52.659 00:52:45 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3647156 00:06:52.917 00:52:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.917 00:52:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.917 00:52:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3647028 ]] 00:06:52.917 00:52:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3647028 00:06:52.917 00:52:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3647028 ']' 00:06:52.917 00:52:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3647028 00:06:52.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3647028) - No such process 00:06:52.917 00:52:46 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3647028 is not found' 00:06:52.917 Process with pid 3647028 is not found 00:06:52.917 00:52:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3647156 ]] 00:06:52.917 00:52:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3647156 00:06:52.917 00:52:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3647156 ']' 00:06:52.917 00:52:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3647156 00:06:52.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3647156) - No such process 00:06:52.917 00:52:46 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3647156 is not found' 00:06:52.917 Process with pid 3647156 is not found 00:06:52.917 00:52:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.917 00:06:52.917 real 0m15.527s 00:06:52.917 user 0m27.159s 00:06:52.917 sys 0m5.323s 00:06:52.917 00:52:46 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.917 00:52:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.917 ************************************ 00:06:52.917 END TEST cpu_locks 00:06:52.917 ************************************ 00:06:52.917 00:06:52.917 real 0m42.046s 00:06:52.917 user 1m20.869s 00:06:52.917 sys 0m9.352s 00:06:52.917 00:52:46 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.917 00:52:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.917 ************************************ 00:06:52.917 END TEST event 00:06:52.917 ************************************ 00:06:53.175 00:52:46 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.175 00:52:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:53.175 00:52:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.175 00:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:53.175 ************************************ 00:06:53.175 START TEST thread 00:06:53.175 ************************************ 00:06:53.175 00:52:46 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.175 * Looking for test storage... 00:06:53.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:53.175 00:52:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.175 00:52:46 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:53.175 00:52:46 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.175 00:52:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.175 ************************************ 00:06:53.175 START TEST thread_poller_perf 00:06:53.175 ************************************ 00:06:53.175 00:52:46 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.175 [2024-07-25 00:52:46.188120] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:53.175 [2024-07-25 00:52:46.188182] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647520 ] 00:06:53.175 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.175 [2024-07-25 00:52:46.250525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.432 [2024-07-25 00:52:46.340362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.432 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:54.363 ====================================== 00:06:54.363 busy:2707830440 (cyc) 00:06:54.363 total_run_count: 292000 00:06:54.363 tsc_hz: 2700000000 (cyc) 00:06:54.363 ====================================== 00:06:54.363 poller_cost: 9273 (cyc), 3434 (nsec) 00:06:54.363 00:06:54.363 real 0m1.256s 00:06:54.363 user 0m1.165s 00:06:54.363 sys 0m0.085s 00:06:54.363 00:52:47 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.363 00:52:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.363 ************************************ 00:06:54.363 END TEST thread_poller_perf 00:06:54.363 ************************************ 00:06:54.363 00:52:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.363 00:52:47 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:54.363 00:52:47 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.364 00:52:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.364 ************************************ 00:06:54.364 START TEST thread_poller_perf 00:06:54.364 ************************************ 00:06:54.364 00:52:47 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.364 [2024-07-25 00:52:47.495759] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
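poller_cost in these summaries is just busy cycles divided by iterations, converted to time with the TSC rate. Reproducing the first run's numbers (292000 invocations of 1000 pollers with a 1 us period) with bc:

  echo '2707830440 / 292000' | bc               # 9273 cycles per poller invocation
  echo '9273 * 1000000000 / 2700000000' | bc    # 3434 nsec at the 2700000000 cyc tsc_hz

The zero-period run reported below lands at 692 cycles / 256 nsec by the same arithmetic; the per-call cost drops presumably because active (0 us) pollers skip the timed-poller expiry bookkeeping done on each pass.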
00:06:54.364 [2024-07-25 00:52:47.495829] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647678 ] 00:06:54.621 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.621 [2024-07-25 00:52:47.560617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.621 [2024-07-25 00:52:47.652540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.621 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:55.994 ====================================== 00:06:55.994 busy:2702586940 (cyc) 00:06:55.994 total_run_count: 3902000 00:06:55.994 tsc_hz: 2700000000 (cyc) 00:06:55.994 ====================================== 00:06:55.994 poller_cost: 692 (cyc), 256 (nsec) 00:06:55.994 00:06:55.994 real 0m1.256s 00:06:55.994 user 0m1.164s 00:06:55.994 sys 0m0.086s 00:06:55.994 00:52:48 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.994 00:52:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.994 ************************************ 00:06:55.994 END TEST thread_poller_perf 00:06:55.994 ************************************ 00:06:55.994 00:52:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.994 00:06:55.994 real 0m2.659s 00:06:55.994 user 0m2.384s 00:06:55.994 sys 0m0.276s 00:06:55.994 00:52:48 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.994 00:52:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.994 ************************************ 00:06:55.994 END TEST thread 00:06:55.994 ************************************ 00:06:55.994 00:52:48 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:55.994 00:52:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.994 00:52:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.994 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:55.994 ************************************ 00:06:55.994 START TEST accel 00:06:55.994 ************************************ 00:06:55.994 00:52:48 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:55.994 * Looking for test storage... 
00:06:55.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:55.994 00:52:48 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:55.994 00:52:48 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:55.994 00:52:48 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.994 00:52:48 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3647873 00:06:55.994 00:52:48 accel -- accel/accel.sh@63 -- # waitforlisten 3647873 00:06:55.994 00:52:48 accel -- common/autotest_common.sh@827 -- # '[' -z 3647873 ']' 00:06:55.994 00:52:48 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:55.994 00:52:48 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:55.994 00:52:48 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.994 00:52:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.994 00:52:48 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.994 00:52:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.994 00:52:48 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.994 00:52:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.994 00:52:48 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.994 00:52:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.994 00:52:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.994 00:52:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.994 00:52:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:55.994 00:52:48 accel -- accel/accel.sh@41 -- # jq -r . 00:06:55.994 [2024-07-25 00:52:48.907602] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:55.994 [2024-07-25 00:52:48.907691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647873 ] 00:06:55.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.994 [2024-07-25 00:52:48.969487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.994 [2024-07-25 00:52:49.055553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.252 00:52:49 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.252 00:52:49 accel -- common/autotest_common.sh@860 -- # return 0 00:06:56.252 00:52:49 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:56.252 00:52:49 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:56.252 00:52:49 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:56.252 00:52:49 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:56.252 00:52:49 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:56.252 00:52:49 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:56.252 00:52:49 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:56.252 00:52:49 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.252 00:52:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.252 00:52:49 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.252 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.252 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.252 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.252 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.252 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.252 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.252 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.252 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.252 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.252 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.252 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 
00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.253 00:52:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.253 00:52:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.253 00:52:49 accel -- accel/accel.sh@75 -- # killprocess 3647873 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@946 -- # '[' -z 3647873 ']' 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@950 -- # kill -0 3647873 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@951 -- # uname 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3647873 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3647873' 00:06:56.253 killing process with pid 3647873 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@965 -- # kill 3647873 00:06:56.253 00:52:49 accel -- common/autotest_common.sh@970 -- # wait 3647873 00:06:56.818 00:52:49 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:56.818 00:52:49 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:56.818 00:52:49 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:56.818 00:52:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.818 00:52:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.818 00:52:49 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:56.818 00:52:49 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
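The loop traced above is how accel.sh learns the opcode-to-module map before any test runs: it queries the freshly started spdk_tgt over RPC and folds each key=value pair into the expected_opcs associative array. A minimal sketch of that pattern, assuming SPDK's rpc.py is on PATH and the target is listening on /var/tmp/spdk.sock (the trace shows every opcode resolving to the software module):

    # Ask the target for its opcode->module assignments (accel.sh@70-73 above)
    declare -A expected_opcs
    exp_opcs=($(rpc.py accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # split e.g. "copy=software"
        expected_opcs["$opc"]=$module
    done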
00:06:56.818 00:52:49 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.818 00:52:49 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:56.818 00:52:49 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:56.818 00:52:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.818 00:52:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.818 00:52:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.818 ************************************ 00:06:56.818 START TEST accel_missing_filename 00:06:56.818 ************************************ 00:06:56.818 00:52:49 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:56.818 00:52:49 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:56.818 00:52:49 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:56.818 00:52:49 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.818 00:52:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.818 00:52:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.818 00:52:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.818 00:52:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:56.818 00:52:49 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:56.818 [2024-07-25 00:52:49.898139] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:56.818 [2024-07-25 00:52:49.898205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648039 ] 00:06:56.818 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.818 [2024-07-25 00:52:49.961518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.076 [2024-07-25 00:52:50.059586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.076 [2024-07-25 00:52:50.120975] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.076 [2024-07-25 00:52:50.198892] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:57.335 A filename is required. 
00:06:57.335 00:52:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:57.335 00:52:50 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.335 00:52:50 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:57.335 00:52:50 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:57.335 00:52:50 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:57.335 00:52:50 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.335 00:06:57.335 real 0m0.400s 00:06:57.335 user 0m0.292s 00:06:57.335 sys 0m0.143s 00:06:57.335 00:52:50 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.335 00:52:50 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:57.335 ************************************ 00:06:57.335 END TEST accel_missing_filename 00:06:57.335 ************************************ 00:06:57.335 00:52:50 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.335 00:52:50 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:57.335 00:52:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.335 00:52:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.335 ************************************ 00:06:57.335 START TEST accel_compress_verify 00:06:57.335 ************************************ 00:06:57.335 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.335 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:57.335 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.335 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.335 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.335 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.335 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.335 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.335 00:52:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.335 00:52:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:57.335 00:52:50 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.335 00:52:50 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.335 00:52:50 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.335 00:52:50 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.335 00:52:50 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.335 
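The NOT wrapper driving accel_missing_filename above inverts the exit status: the test passes precisely because accel_perf aborts with "A filename is required." when -w compress is given no -l input file, and the es=234 / es=106 / es=1 lines are the harness folding that status down to a plain failure code. An illustrative sketch of the idea (a guess at the shape, not SPDK's exact helper):

    # Succeed only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # non-zero exit from the command is the pass case
    }
    NOT accel_perf -t 1 -w compress   # must abort: compress without -l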
00:52:50 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:57.335 00:52:50 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:57.335 [2024-07-25 00:52:50.340537] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:57.335 [2024-07-25 00:52:50.340603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648190 ] 00:06:57.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.335 [2024-07-25 00:52:50.406443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.593 [2024-07-25 00:52:50.500070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.593 [2024-07-25 00:52:50.561553] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.593 [2024-07-25 00:52:50.646996] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:57.593 00:06:57.593 Compression does not support the verify option, aborting. 00:06:57.593 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:57.593 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.593 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:57.593 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:57.593 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:57.593 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.593 00:06:57.593 real 0m0.408s 00:06:57.593 user 0m0.297s 00:06:57.593 sys 0m0.146s 00:06:57.593 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.593 00:52:50 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:57.593 ************************************ 00:06:57.593 END TEST accel_compress_verify 00:06:57.593 ************************************ 00:06:57.852 00:52:50 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:57.852 00:52:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:57.852 00:52:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.852 00:52:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.852 ************************************ 00:06:57.852 START TEST accel_wrong_workload 00:06:57.852 ************************************ 00:06:57.852 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:57.852 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:57.852 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:57.852 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.852 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.852 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.852 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.852 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:57.852 00:52:50 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:57.852 Unsupported workload type: foobar 00:06:57.853 [2024-07-25 00:52:50.792272] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:57.853 accel_perf options: 00:06:57.853 [-h help message] 00:06:57.853 [-q queue depth per core] 00:06:57.853 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:57.853 [-T number of threads per core 00:06:57.853 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:57.853 [-t time in seconds] 00:06:57.853 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:57.853 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:57.853 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:57.853 [-l for compress/decompress workloads, name of uncompressed input file 00:06:57.853 [-S for crc32c workload, use this seed value (default 0) 00:06:57.853 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:57.853 [-f for fill workload, use this BYTE value (default 255) 00:06:57.853 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:57.853 [-y verify result if this switch is on] 00:06:57.853 [-a tasks to allocate per core (default: same value as -q)] 00:06:57.853 Can be used to spread operations across a wider range of memory. 
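The usage dump above is accel_perf rejecting -w foobar: the workload must be one of the listed types. Assembling the options shown there into the valid invocations this job actually runs later in the log (ACCEL is shorthand for the workspace binary, a reading aid rather than extra coverage):

    ACCEL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    $ACCEL -t 1 -w crc32c -S 32 -y              # 1 second of crc32c, seed 32, verify results
    $ACCEL -t 1 -w crc32c -y -C 2               # same workload over a 2-element io vector (-C)
    $ACCEL -t 1 -w copy -y                      # plain copy, verified
    $ACCEL -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill byte 128, queue depth 64, 64 tasks/core

And per the -x line, an xor run needs at least two source buffers (e.g. -w xor -y -x 2), which is why the -x -1 attempt below is rejected.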
00:06:57.853 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:57.853 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.853 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.853 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.853 00:06:57.853 real 0m0.023s 00:06:57.853 user 0m0.011s 00:06:57.853 sys 0m0.012s 00:06:57.853 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.853 00:52:50 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:57.853 ************************************ 00:06:57.853 END TEST accel_wrong_workload 00:06:57.853 ************************************ 00:06:57.853 Error: writing output failed: Broken pipe 00:06:57.853 00:52:50 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:57.853 00:52:50 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:57.853 00:52:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.853 00:52:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.853 ************************************ 00:06:57.853 START TEST accel_negative_buffers 00:06:57.853 ************************************ 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:57.853 00:52:50 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:57.853 -x option must be non-negative. 
00:06:57.853 [2024-07-25 00:52:50.864509] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:57.853 accel_perf options: 00:06:57.853 [-h help message] 00:06:57.853 [-q queue depth per core] 00:06:57.853 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:57.853 [-T number of threads per core 00:06:57.853 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:57.853 [-t time in seconds] 00:06:57.853 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:57.853 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:57.853 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:57.853 [-l for compress/decompress workloads, name of uncompressed input file 00:06:57.853 [-S for crc32c workload, use this seed value (default 0) 00:06:57.853 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:57.853 [-f for fill workload, use this BYTE value (default 255) 00:06:57.853 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:57.853 [-y verify result if this switch is on] 00:06:57.853 [-a tasks to allocate per core (default: same value as -q)] 00:06:57.853 Can be used to spread operations across a wider range of memory. 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.853 00:06:57.853 real 0m0.023s 00:06:57.853 user 0m0.011s 00:06:57.853 sys 0m0.012s 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.853 00:52:50 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:57.853 ************************************ 00:06:57.853 END TEST accel_negative_buffers 00:06:57.853 ************************************ 00:06:57.853 Error: writing output failed: Broken pipe 00:06:57.853 00:52:50 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:57.853 00:52:50 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:57.853 00:52:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.853 00:52:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.853 ************************************ 00:06:57.853 START TEST accel_crc32c 00:06:57.853 ************************************ 00:06:57.853 00:52:50 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:57.853 00:52:50 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:57.853 [2024-07-25 00:52:50.928767] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:57.853 [2024-07-25 00:52:50.928831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648250 ] 00:06:57.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.854 [2024-07-25 00:52:50.990022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.112 [2024-07-25 00:52:51.083571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.112 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.113 00:52:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.486 00:52:52 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:59.486 00:52:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.486 00:06:59.486 real 0m1.409s 00:06:59.486 user 0m1.266s 00:06:59.486 sys 0m0.146s 00:06:59.487 00:52:52 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.487 00:52:52 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:59.487 ************************************ 00:06:59.487 END TEST accel_crc32c 00:06:59.487 ************************************ 00:06:59.487 00:52:52 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:59.487 00:52:52 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:59.487 00:52:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.487 00:52:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.487 ************************************ 00:06:59.487 START TEST accel_crc32c_C2 00:06:59.487 ************************************ 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:59.487 00:52:52 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:59.487 [2024-07-25 00:52:52.382027] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:06:59.487 [2024-07-25 00:52:52.382091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648525 ] 00:06:59.487 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.487 [2024-07-25 00:52:52.444401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.487 [2024-07-25 00:52:52.537716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.487 00:52:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.859 00:07:00.859 real 0m1.396s 00:07:00.859 user 0m1.260s 00:07:00.859 sys 0m0.137s 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.859 00:52:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:00.859 ************************************ 00:07:00.859 END TEST accel_crc32c_C2 00:07:00.859 ************************************ 00:07:00.859 00:52:53 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:00.859 00:52:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:00.859 00:52:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.859 00:52:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.859 ************************************ 00:07:00.859 START TEST accel_copy 00:07:00.859 ************************************ 00:07:00.859 00:52:53 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.859 
00:52:53 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:00.859 00:52:53 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:00.859 [2024-07-25 00:52:53.820249] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:00.859 [2024-07-25 00:52:53.820338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648686 ] 00:07:00.859 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.859 [2024-07-25 00:52:53.884175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.859 [2024-07-25 00:52:53.977087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.118 00:52:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:52:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.492 00:52:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:02.492 00:52:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.492 00:07:02.492 real 0m1.411s 00:07:02.492 user 0m1.256s 00:07:02.492 sys 0m0.157s 00:07:02.492 00:52:55 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.492 00:52:55 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.492 ************************************ 00:07:02.492 END TEST accel_copy 00:07:02.492 ************************************ 00:07:02.492 00:52:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.492 00:52:55 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:02.492 00:52:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.492 00:52:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.492 ************************************ 00:07:02.492 START TEST accel_fill 00:07:02.492 ************************************ 00:07:02.492 00:52:55 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.492 00:52:55 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:02.492 [2024-07-25 00:52:55.274142] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:02.492 [2024-07-25 00:52:55.274194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648839 ] 00:07:02.492 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.492 [2024-07-25 00:52:55.335230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.492 [2024-07-25 00:52:55.428376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.492 00:52:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.866 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:03.867 00:52:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.867 00:07:03.867 real 0m1.402s 00:07:03.867 user 0m1.263s 00:07:03.867 sys 0m0.142s 00:07:03.867 00:52:56 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.867 00:52:56 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:03.867 ************************************ 00:07:03.867 END TEST accel_fill 00:07:03.867 ************************************ 00:07:03.867 00:52:56 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:03.867 00:52:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:03.867 00:52:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.867 00:52:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.867 ************************************ 00:07:03.867 START TEST accel_copy_crc32c 00:07:03.867 ************************************ 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
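[Editorial aside] The copy_crc32c workload being configured above copies a buffer and computes a CRC-32C over the data as a single operation; the trace shows a 4096-byte block and what appears to be a CRC seed of 0 (the val=0 right after accel_opc=copy_crc32c — an interpretation, not confirmed by the log). Below is a minimal C sketch of that semantics under those assumptions; crc32c_sw and copy_crc32c are illustrative names, not SPDK's accel API, and the CRC uses the reflected Castagnoli polynomial 0x82F63B78.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c_sw(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78 : crc >> 1;
    }
    return ~crc;
}

/* Copy src to dst and return the CRC-32C of the copied data. */
static uint32_t copy_crc32c(uint8_t *dst, const uint8_t *src, size_t len,
                            uint32_t seed)
{
    memcpy(dst, src, len);            /* the "copy" half of the operation */
    return crc32c_sw(seed, dst, len); /* the "crc32c" half, seeded (0 here) */
}

int main(void)
{
    static uint8_t src[4096], dst[4096];   /* 4096-byte block, as in the trace */
    for (size_t i = 0; i < sizeof(src); i++)
        src[i] = (uint8_t)i;
    uint32_t crc = copy_crc32c(dst, src, sizeof(src), 0);
    /* A check in the spirit of the -y verify flag: dst matches src
     * and recomputing the CRC over src gives the same value. */
    printf("crc=0x%08x match=%d\n", (unsigned)crc,
           memcmp(dst, src, sizeof(dst)) == 0 &&
           crc == crc32c_sw(0, src, sizeof(src)));
    return 0;
}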
00:07:03.867 [2024-07-25 00:52:56.724231] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:03.867 [2024-07-25 00:52:56.724321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648996 ] 00:07:03.867 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.867 [2024-07-25 00:52:56.786836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.867 [2024-07-25 00:52:56.879682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.867 00:52:56 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.867 00:52:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.241 00:07:05.241 real 0m1.400s 00:07:05.241 user 0m1.257s 00:07:05.241 sys 0m0.145s 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.241 00:52:58 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:05.241 ************************************ 00:07:05.241 END TEST accel_copy_crc32c 00:07:05.241 ************************************ 00:07:05.241 00:52:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.241 00:52:58 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:05.241 00:52:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.241 00:52:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.241 ************************************ 00:07:05.241 START TEST accel_copy_crc32c_C2 00:07:05.241 ************************************ 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:05.241 [2024-07-25 00:52:58.171266] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:05.241 [2024-07-25 00:52:58.171343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649268 ] 00:07:05.241 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.241 [2024-07-25 00:52:58.234001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.241 [2024-07-25 00:52:58.325188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.241 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.499 00:52:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.432 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.433 00:07:06.433 real 0m1.401s 00:07:06.433 user 0m1.261s 00:07:06.433 sys 0m0.143s 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.433 00:52:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:06.433 
************************************ 00:07:06.433 END TEST accel_copy_crc32c_C2 00:07:06.433 ************************************ 00:07:06.433 00:52:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:06.433 00:52:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:06.433 00:52:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.433 00:52:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.690 ************************************ 00:07:06.690 START TEST accel_dualcast 00:07:06.690 ************************************ 00:07:06.690 00:52:59 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:06.690 [2024-07-25 00:52:59.615370] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
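[Editorial aside] The dualcast test starting above broadcasts one source buffer into two destination buffers. A minimal C sketch of that semantics, assuming the 4096-byte block size shown in the trace; dualcast_sw is an illustrative name, not SPDK's accel API, and the memcmp checks stand in for what the -y verify option presumably does.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Dualcast: write one source buffer to two destination buffers. */
static void dualcast_sw(void *dst1, void *dst2, const void *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}

int main(void)
{
    static uint8_t src[4096], d1[4096], d2[4096]; /* 4096-byte block */
    memset(src, 0xA5, sizeof(src));
    dualcast_sw(d1, d2, src, sizeof(src));
    /* Verify both destinations against the source. */
    assert(memcmp(d1, src, sizeof(src)) == 0);
    assert(memcmp(d2, src, sizeof(src)) == 0);
    return 0;
}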
00:07:06.690 [2024-07-25 00:52:59.615428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649431 ] 00:07:06.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.690 [2024-07-25 00:52:59.677282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.690 [2024-07-25 00:52:59.770445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.690 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 
00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.691 00:52:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.107 00:53:01 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:08.107 00:53:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.107 00:07:08.107 real 0m1.410s 00:07:08.107 user 0m1.264s 00:07:08.107 sys 0m0.148s 00:07:08.107 00:53:01 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.107 00:53:01 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:08.107 ************************************ 00:07:08.107 END TEST accel_dualcast 00:07:08.107 ************************************ 00:07:08.107 00:53:01 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:08.107 00:53:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:08.107 00:53:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.107 00:53:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.107 ************************************ 00:07:08.107 START TEST accel_compare 00:07:08.107 ************************************ 00:07:08.107 00:53:01 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.107 00:53:01 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.108 00:53:01 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.108 00:53:01 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.108 00:53:01 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.108 00:53:01 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:08.108 00:53:01 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:08.108 [2024-07-25 00:53:01.069391] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
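[Editorial aside] The compare workload configured above checks whether two equal-length buffers hold identical contents. A minimal C sketch of that semantics; compare_sw is an illustrative name, not SPDK's accel API.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Compare: report whether two equal-length buffers are byte-identical. */
static int compare_sw(const void *a, const void *b, size_t len)
{
    return memcmp(a, b, len) == 0;
}

int main(void)
{
    static uint8_t a[4096], b[4096];          /* 4096-byte block as traced */
    memset(a, 0x5A, sizeof(a));
    memcpy(b, a, sizeof(b));
    printf("equal=%d\n", compare_sw(a, b, sizeof(a))); /* equal=1 */
    b[100] ^= 1;                                       /* flip one bit */
    printf("equal=%d\n", compare_sw(a, b, sizeof(a))); /* equal=0 */
    return 0;
}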
00:07:08.108 [2024-07-25 00:53:01.069455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649584 ] 00:07:08.108 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.108 [2024-07-25 00:53:01.131417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.108 [2024-07-25 00:53:01.224618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.366 00:53:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:09.737 00:53:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.737 00:07:09.737 real 0m1.406s 00:07:09.737 user 0m1.266s 00:07:09.737 sys 0m0.143s 00:07:09.737 00:53:02 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.737 00:53:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 ************************************ 00:07:09.737 END TEST accel_compare 00:07:09.737 ************************************ 00:07:09.737 00:53:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:09.737 00:53:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:09.737 00:53:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.737 00:53:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 ************************************ 00:07:09.737 START TEST accel_xor 00:07:09.737 ************************************ 00:07:09.737 00:53:02 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:09.737 [2024-07-25 00:53:02.521655] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
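[Editorial aside] The xor workload configured below combines several source buffers into one destination by bytewise XOR; the val=2 in the trace suggests two sources for this run (the next test passes -x 3 for three). A minimal C sketch under those assumptions; xor_sw is an illustrative name, not SPDK's accel API.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* XOR nsrc source buffers together into dst, byte by byte. */
static void xor_sw(uint8_t *dst, const uint8_t *const *srcs, int nsrc,
                   size_t len)
{
    memcpy(dst, srcs[0], len);
    for (int i = 1; i < nsrc; i++)
        for (size_t j = 0; j < len; j++)
            dst[j] ^= srcs[i][j];
}

int main(void)
{
    static uint8_t s0[4096], s1[4096], dst[4096]; /* 4096-byte block */
    memset(s0, 0xFF, sizeof(s0));
    memset(s1, 0x0F, sizeof(s1));
    const uint8_t *srcs[] = { s0, s1 };   /* two sources, the default here */
    xor_sw(dst, srcs, 2, sizeof(dst));
    printf("dst[0]=0x%02x\n", dst[0]);    /* 0xFF ^ 0x0F = 0xF0 */
    return 0;
}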
00:07:09.737 [2024-07-25 00:53:02.521719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649754 ] 00:07:09.737 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.737 [2024-07-25 00:53:02.583682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.737 [2024-07-25 00:53:02.676259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 00:53:02 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.738 00:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.109 
00:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.109 00:07:11.109 real 0m1.401s 00:07:11.109 user 0m1.266s 00:07:11.109 sys 0m0.136s 00:07:11.109 00:53:03 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.109 00:53:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:11.109 ************************************ 00:07:11.109 END TEST accel_xor 00:07:11.109 ************************************ 00:07:11.109 00:53:03 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:11.109 00:53:03 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:11.109 00:53:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.109 00:53:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.109 ************************************ 00:07:11.109 START TEST accel_xor 00:07:11.109 ************************************ 00:07:11.109 00:53:03 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:11.109 00:53:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:11.109 [2024-07-25 00:53:03.968608] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:07:11.109 [2024-07-25 00:53:03.968669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650018 ] 00:07:11.109 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.109 [2024-07-25 00:53:04.029737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.109 [2024-07-25 00:53:04.120762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.109 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.109 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.109 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.109 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.109 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.110 00:53:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.483 
00:53:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.483 00:53:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.483 00:07:12.483 real 0m1.401s 00:07:12.483 user 0m1.265s 00:07:12.483 sys 0m0.139s 00:07:12.483 00:53:05 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.483 00:53:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.483 ************************************ 00:07:12.483 END TEST accel_xor 00:07:12.483 ************************************ 00:07:12.483 00:53:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:12.483 00:53:05 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:12.483 00:53:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.483 00:53:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.483 ************************************ 00:07:12.483 START TEST accel_dif_verify 00:07:12.483 ************************************ 00:07:12.483 00:53:05 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:12.483 00:53:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:12.483 [2024-07-25 00:53:05.420301] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:07:12.483 [2024-07-25 00:53:05.420366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650170 ] 00:07:12.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.483 [2024-07-25 00:53:05.484179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.483 [2024-07-25 00:53:05.576045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 
00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:12.741 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.742 00:53:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.676 
00:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:13.676 00:53:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.676 00:07:13.676 real 0m1.410s 00:07:13.676 user 0m1.266s 00:07:13.676 sys 0m0.148s 00:07:13.676 00:53:06 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.676 00:53:06 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.676 ************************************ 00:07:13.676 END TEST accel_dif_verify 00:07:13.676 ************************************ 00:07:13.934 00:53:06 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:13.934 00:53:06 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:13.934 00:53:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.934 00:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.934 ************************************ 00:07:13.934 START TEST accel_dif_generate 00:07:13.934 ************************************ 00:07:13.934 00:53:06 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.934 00:53:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.935 00:53:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.935 00:53:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.935 00:53:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:13.935 00:53:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:13.935 [2024-07-25 00:53:06.880838] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:13.935 [2024-07-25 00:53:06.880903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650329 ] 00:07:13.935 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.935 [2024-07-25 00:53:06.944434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.935 [2024-07-25 00:53:07.035801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.194 00:53:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:15.191 00:53:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.191 00:07:15.191 real 0m1.400s 00:07:15.191 user 0m1.262s 00:07:15.191 sys 
0m0.142s 00:07:15.191 00:53:08 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.191 00:53:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:15.191 ************************************ 00:07:15.191 END TEST accel_dif_generate 00:07:15.191 ************************************ 00:07:15.191 00:53:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:15.191 00:53:08 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:15.191 00:53:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.191 00:53:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.191 ************************************ 00:07:15.191 START TEST accel_dif_generate_copy 00:07:15.191 ************************************ 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:15.191 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:15.191 [2024-07-25 00:53:08.324991] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:07:15.191 [2024-07-25 00:53:08.325052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650598 ] 00:07:15.450 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.450 [2024-07-25 00:53:08.391695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.450 [2024-07-25 00:53:08.483460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.450 00:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.824 00:07:16.824 real 0m1.396s 00:07:16.824 user 0m1.259s 00:07:16.824 sys 0m0.139s 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.824 00:53:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.824 ************************************ 00:07:16.824 END TEST accel_dif_generate_copy 00:07:16.824 ************************************ 00:07:16.824 00:53:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:16.824 00:53:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.824 00:53:09 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:16.824 00:53:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.824 00:53:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.824 ************************************ 00:07:16.824 START TEST accel_comp 00:07:16.824 ************************************ 00:07:16.824 00:53:09 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:16.824 00:53:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:16.824 [2024-07-25 00:53:09.769872] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:16.824 [2024-07-25 00:53:09.769940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650766 ] 00:07:16.824 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.824 [2024-07-25 00:53:09.833515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.824 [2024-07-25 00:53:09.926138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.082 
00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.082 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.083 00:53:09 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.083 00:53:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:18.016 00:53:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.016 00:07:18.016 real 0m1.414s 00:07:18.016 user 0m1.271s 00:07:18.017 sys 0m0.147s 00:07:18.017 00:53:11 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.017 00:53:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:18.017 ************************************ 00:07:18.017 END TEST accel_comp 00:07:18.017 ************************************ 00:07:18.275 00:53:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.275 00:53:11 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:18.275 00:53:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.275 00:53:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.275 ************************************ 00:07:18.275 START TEST accel_decomp 00:07:18.275 ************************************ 00:07:18.275 00:53:11 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.275 00:53:11 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:18.275 [2024-07-25 00:53:11.231946] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:18.275 [2024-07-25 00:53:11.232010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650918 ] 00:07:18.275 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.275 [2024-07-25 00:53:11.295566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.275 [2024-07-25 00:53:11.387133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.533 00:53:11 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.533 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.534 00:53:11 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.534 00:53:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.908 00:53:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.908 00:07:19.908 real 0m1.411s 00:07:19.908 user 0m1.265s 00:07:19.908 sys 0m0.149s 00:07:19.908 00:53:12 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.908 00:53:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:19.908 ************************************ 00:07:19.908 END TEST accel_decomp 00:07:19.908 ************************************ 00:07:19.908 
00:53:12 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.908 00:53:12 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:19.908 00:53:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.908 00:53:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.908 ************************************ 00:07:19.908 START TEST accel_decmop_full 00:07:19.908 ************************************ 00:07:19.908 00:53:12 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:19.908 [2024-07-25 00:53:12.693592] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
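Every xtrace entry above follows the same option-parsing idiom in accel.sh: expected settings are consumed by 'while IFS=: read -r var val' and dispatched through a 'case "$var"' block (the @19 and @21 trace lines), with accel_opc= and accel_module= capturing the workload and backing module. Below is a minimal standalone sketch of that idiom, reconstructed from the trace rather than copied from SPDK's accel.sh source; the opc/module key names are assumptions for illustration.

    # Feed "key:value" pairs through the same IFS=: / read -r var val / case
    # pattern visible in the xtrace lines above.
    printf '%s\n' 'opc:decompress' 'module:software' |
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # trace shows: accel_opc=decompress
            module) accel_module=$val ;;  # trace shows: accel_module=software
            *)      : ;;                  # block size, queue depth, run time, ...
        esac
        echo "parsed $var -> $val"
    done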
00:07:19.908 [2024-07-25 00:53:12.693656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651077 ] 00:07:19.908 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.908 [2024-07-25 00:53:12.758090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.908 [2024-07-25 00:53:12.848974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
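Note the buffer size for this *_full variant: where the plain accel_comp and accel_decomp runs above traced val='4096 bytes', this test passes -o 0 and traces val='111250 bytes', which appears to be the full size of the bib test file rather than the default 4096-byte chunk. The command under test, exactly as captured at the top of this test:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0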
00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:19.908 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.909 00:53:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.281 00:53:14 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.282 00:53:14 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.282 00:07:21.282 real 0m1.425s 00:07:21.282 user 0m1.278s 00:07:21.282 sys 0m0.151s 00:07:21.282 00:53:14 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.282 00:53:14 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:21.282 ************************************ 00:07:21.282 END TEST accel_decmop_full 00:07:21.282 ************************************ 00:07:21.282 00:53:14 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.282 00:53:14 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:21.282 00:53:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.282 00:53:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.282 ************************************ 00:07:21.282 START TEST accel_decomp_mcore 00:07:21.282 ************************************ 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:21.282 [2024-07-25 00:53:14.163177] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:21.282 [2024-07-25 00:53:14.163251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651345 ] 00:07:21.282 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.282 [2024-07-25 00:53:14.228260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.282 [2024-07-25 00:53:14.321667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.282 [2024-07-25 00:53:14.325261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.282 [2024-07-25 00:53:14.325327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.282 [2024-07-25 00:53:14.325331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.282 00:53:14 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.282 00:53:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.655 00:07:22.655 real 0m1.404s 00:07:22.655 user 0m4.670s 00:07:22.655 sys 0m0.157s 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.655 00:53:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:22.655 ************************************ 00:07:22.655 END TEST accel_decomp_mcore 00:07:22.655 ************************************ 00:07:22.655 00:53:15 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.655 00:53:15 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:22.655 00:53:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.655 00:53:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.655 ************************************ 00:07:22.655 START TEST accel_decomp_full_mcore 00:07:22.655 ************************************ 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.655 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.655 [2024-07-25 00:53:15.611304] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:22.655 [2024-07-25 00:53:15.611367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651516 ] 00:07:22.655 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.655 [2024-07-25 00:53:15.674325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.655 [2024-07-25 00:53:15.770680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.655 [2024-07-25 00:53:15.770736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.655 [2024-07-25 00:53:15.770852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.655 [2024-07-25 00:53:15.770855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.914 00:53:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.914 00:53:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.285 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.285 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.285 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.285 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.286 00:07:24.286 real 0m1.431s 00:07:24.286 user 0m4.773s 00:07:24.286 sys 0m0.156s 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.286 00:53:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 ************************************ 00:07:24.286 END TEST accel_decomp_full_mcore 00:07:24.286 ************************************ 00:07:24.286 00:53:17 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.286 00:53:17 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:24.286 00:53:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.286 00:53:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 ************************************ 00:07:24.286 START TEST accel_decomp_mthread 00:07:24.286 ************************************ 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:24.286 [2024-07-25 00:53:17.093123] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:24.286 [2024-07-25 00:53:17.093184] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651670 ] 00:07:24.286 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.286 [2024-07-25 00:53:17.156982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.286 [2024-07-25 00:53:17.248697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:24.286 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.287 00:53:17 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.659 00:07:25.659 real 0m1.411s 00:07:25.659 user 0m1.265s 00:07:25.659 sys 0m0.149s 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.659 00:53:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:25.659 ************************************ 00:07:25.659 END TEST accel_decomp_mthread 00:07:25.659 ************************************ 00:07:25.659 00:53:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.659 00:53:18 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:25.659 00:53:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.659 00:53:18 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.659 ************************************ 00:07:25.659 START TEST accel_decomp_full_mthread 00:07:25.659 ************************************ 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.659 [2024-07-25 00:53:18.551214] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
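Note: the trace above shows how each accel test drives the standalone accel_perf example binary. A minimal sketch of reproducing this multi-threaded decompress run by hand, assuming the usual readings of the flags visible in the trace (-t run time in seconds, -w workload, -l compressed input file, -y verify the result, -T threads per core); the empty '{}' config is a placeholder assumption, since the harness builds the real JSON in build_accel_config and feeds it over fd 62:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # process substitution stands in for the harness's -c /dev/fd/62 plumbing
    ./build/examples/accel_perf -c <(echo '{}') -t 1 -w decompress \
        -l test/accel/bib -y -o 0 -T 2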
00:07:25.659 [2024-07-25 00:53:18.551286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651938 ] 00:07:25.659 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.659 [2024-07-25 00:53:18.615389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.659 [2024-07-25 00:53:18.708306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.659 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.660 00:53:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.031 00:07:27.031 real 0m1.439s 00:07:27.031 user 0m1.301s 00:07:27.031 sys 0m0.141s 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.031 00:53:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:27.031 ************************************ 00:07:27.031 END TEST accel_decomp_full_mthread 00:07:27.031 
************************************ 00:07:27.031 00:53:19 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:27.031 00:53:19 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:27.031 00:53:19 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:27.031 00:53:19 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:27.031 00:53:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.031 00:53:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.031 00:53:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.031 00:53:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.031 00:53:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.031 00:53:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.031 00:53:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.032 00:53:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:27.032 00:53:19 accel -- accel/accel.sh@41 -- # jq -r . 00:07:27.032 ************************************ 00:07:27.032 START TEST accel_dif_functional_tests 00:07:27.032 ************************************ 00:07:27.032 00:53:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:27.032 [2024-07-25 00:53:20.056929] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:27.032 [2024-07-25 00:53:20.057017] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652104 ] 00:07:27.032 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.032 [2024-07-25 00:53:20.119945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.289 [2024-07-25 00:53:20.214970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.289 [2024-07-25 00:53:20.215022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.289 [2024-07-25 00:53:20.215025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.289 00:07:27.289 00:07:27.289 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.289 http://cunit.sourceforge.net/ 00:07:27.289 00:07:27.289 00:07:27.289 Suite: accel_dif 00:07:27.289 Test: verify: DIF generated, GUARD check ...passed 00:07:27.289 Test: verify: DIF generated, APPTAG check ...passed 00:07:27.289 Test: verify: DIF generated, REFTAG check ...passed 00:07:27.289 Test: verify: DIF not generated, GUARD check ...[2024-07-25 00:53:20.312768] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:27.289 passed 00:07:27.289 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 00:53:20.312840] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:27.289 passed 00:07:27.289 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 00:53:20.312877] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:27.289 passed 00:07:27.289 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:27.289 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 00:53:20.312952] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:27.289 passed 00:07:27.289 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:27.289 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:27.289 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:27.289 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 00:53:20.313132] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:27.289 passed 00:07:27.289 Test: verify copy: DIF generated, GUARD check ...passed 00:07:27.289 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:27.289 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:27.289 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 00:53:20.313342] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:27.289 passed 00:07:27.289 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 00:53:20.313388] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:27.289 passed 00:07:27.289 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 00:53:20.313431] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:27.289 passed 00:07:27.289 Test: generate copy: DIF generated, GUARD check ...passed 00:07:27.289 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:27.289 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:27.289 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:27.289 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:27.289 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:27.289 Test: generate copy: iovecs-len validate ...[2024-07-25 00:53:20.313706] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:27.289 passed 00:07:27.289 Test: generate copy: buffer alignment validate ...passed 00:07:27.289 00:07:27.289 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.289 suites 1 1 n/a 0 0 00:07:27.289 tests 26 26 26 0 0 00:07:27.289 asserts 115 115 115 0 n/a 00:07:27.289 00:07:27.289 Elapsed time = 0.003 seconds 00:07:27.547 00:07:27.547 real 0m0.511s 00:07:27.547 user 0m0.805s 00:07:27.547 sys 0m0.188s 00:07:27.547 00:53:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.547 00:53:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:27.547 ************************************ 00:07:27.547 END TEST accel_dif_functional_tests 00:07:27.547 ************************************ 00:07:27.547 00:07:27.547 real 0m31.750s 00:07:27.547 user 0m35.130s 00:07:27.547 sys 0m4.634s 00:07:27.547 00:53:20 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.547 00:53:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.547 ************************************ 00:07:27.547 END TEST accel 00:07:27.547 ************************************ 00:07:27.547 00:53:20 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:27.547 00:53:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.547 00:53:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.547 00:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:27.547 ************************************ 00:07:27.547 START TEST accel_rpc 00:07:27.547 ************************************ 00:07:27.547 00:53:20 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:27.547 * Looking for test storage... 00:07:27.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:27.547 00:53:20 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.547 00:53:20 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3652173 00:07:27.547 00:53:20 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:27.547 00:53:20 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3652173 00:07:27.547 00:53:20 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3652173 ']' 00:07:27.547 00:53:20 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.547 00:53:20 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.547 00:53:20 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.547 00:53:20 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.547 00:53:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.804 [2024-07-25 00:53:20.710509] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
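Note: every binary in this suite receives its accel configuration the same way: the harness serializes a JSON config and hands it over an inherited file descriptor, which is the -c /dev/fd/62 argument visible in the traces (the DIF functional test above was launched as test/accel/dif/dif -c /dev/fd/62). A rough bash sketch of that idiom, with the JSON body left as a placeholder assumption:

    accel_json='{}'                        # placeholder; the real config comes from build_accel_config
    exec 62< <(printf '%s' "$accel_json")  # expose the config to children as /dev/fd/62
    ./test/accel/dif/dif -c /dev/fd/62     # the binary reads its accel config from the fd
    exec 62<&-                             # close the descriptor afterwards

This keeps the config out of the filesystem and ties its lifetime to the shell that launched the test.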
00:07:27.804 [2024-07-25 00:53:20.710610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652173 ] 00:07:27.804 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.804 [2024-07-25 00:53:20.769596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.804 [2024-07-25 00:53:20.853977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.804 00:53:20 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.804 00:53:20 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:27.804 00:53:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:27.804 00:53:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:27.804 00:53:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:27.804 00:53:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:27.804 00:53:20 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:27.804 00:53:20 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.804 00:53:20 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.804 00:53:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.804 ************************************ 00:07:27.804 START TEST accel_assign_opcode 00:07:27.804 ************************************ 00:07:27.804 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:27.804 00:53:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:27.804 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.804 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.805 [2024-07-25 00:53:20.934730] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:27.805 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.805 00:53:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:27.805 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.805 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.805 [2024-07-25 00:53:20.942736] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:27.805 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.805 00:53:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:27.805 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.805 00:53:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.062 00:53:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.062 00:53:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:28.062 00:53:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.062 00:53:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:28.062 00:53:21 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.062 00:53:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:28.062 00:53:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.320 software 00:07:28.320 00:07:28.320 real 0m0.289s 00:07:28.320 user 0m0.039s 00:07:28.320 sys 0m0.007s 00:07:28.320 00:53:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.320 00:53:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.320 ************************************ 00:07:28.320 END TEST accel_assign_opcode 00:07:28.320 ************************************ 00:07:28.320 00:53:21 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3652173 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3652173 ']' 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3652173 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3652173 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3652173' 00:07:28.320 killing process with pid 3652173 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@965 -- # kill 3652173 00:07:28.320 00:53:21 accel_rpc -- common/autotest_common.sh@970 -- # wait 3652173 00:07:28.578 00:07:28.578 real 0m1.056s 00:07:28.579 user 0m0.977s 00:07:28.579 sys 0m0.424s 00:07:28.579 00:53:21 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.579 00:53:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.579 ************************************ 00:07:28.579 END TEST accel_rpc 00:07:28.579 ************************************ 00:07:28.579 00:53:21 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:28.579 00:53:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:28.579 00:53:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.579 00:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:28.579 ************************************ 00:07:28.579 START TEST app_cmdline 00:07:28.579 ************************************ 00:07:28.579 00:53:21 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:28.836 * Looking for test storage... 
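Note: the accel_assign_opcode test that just completed exercises the opcode-to-module mapping over JSON-RPC while spdk_tgt is still in --wait-for-rpc mode. The same sequence seen in the trace (accel_assign_opc, framework_start_init, accel_get_opc_assignments) can be issued by hand with scripts/rpc.py; a sketch against the default RPC socket:

    ./scripts/rpc.py accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    ./scripts/rpc.py framework_start_init                     # leave --wait-for-rpc and finish subsystem init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # should print: software

The assignment has to arrive before framework_start_init, which is why the target is started with --wait-for-rpc in the first place.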
00:07:28.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.836 00:53:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:28.836 00:53:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3652379 00:07:28.836 00:53:21 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:28.836 00:53:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3652379 00:07:28.836 00:53:21 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3652379 ']' 00:07:28.836 00:53:21 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.836 00:53:21 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.836 00:53:21 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.836 00:53:21 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.836 00:53:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.836 [2024-07-25 00:53:21.815815] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:28.836 [2024-07-25 00:53:21.815896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652379 ] 00:07:28.836 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.836 [2024-07-25 00:53:21.882491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.836 [2024-07-25 00:53:21.974005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.094 00:53:22 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:29.094 00:53:22 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:29.094 00:53:22 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:29.352 { 00:07:29.352 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:07:29.352 "fields": { 00:07:29.352 "major": 24, 00:07:29.352 "minor": 5, 00:07:29.352 "patch": 1, 00:07:29.352 "suffix": "-pre", 00:07:29.352 "commit": "241d0f3c9" 00:07:29.352 } 00:07:29.352 } 00:07:29.352 00:53:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:29.352 00:53:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:29.352 00:53:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:29.352 00:53:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:29.352 00:53:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:29.352 00:53:22 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.352 00:53:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:29.352 00:53:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.352 00:53:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:29.352 00:53:22 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.610 00:53:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:29.610 00:53:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:29.610 00:53:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.610 request: 00:07:29.610 { 00:07:29.610 "method": "env_dpdk_get_mem_stats", 00:07:29.610 "req_id": 1 00:07:29.610 } 00:07:29.610 Got JSON-RPC error response 00:07:29.610 response: 00:07:29.610 { 00:07:29.610 "code": -32601, 00:07:29.610 "message": "Method not found" 00:07:29.610 } 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.610 00:53:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3652379 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3652379 ']' 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3652379 00:07:29.610 00:53:22 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:29.868 00:53:22 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:29.868 00:53:22 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3652379 00:07:29.868 00:53:22 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:29.868 00:53:22 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:29.868 00:53:22 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3652379' 00:07:29.868 killing process with pid 3652379 00:07:29.868 00:53:22 app_cmdline -- common/autotest_common.sh@965 -- # kill 3652379 00:07:29.868 00:53:22 app_cmdline -- common/autotest_common.sh@970 -- # wait 3652379 00:07:30.143 00:07:30.143 real 0m1.472s 00:07:30.143 user 0m1.771s 00:07:30.143 sys 0m0.479s 00:07:30.143 00:53:23 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.143 00:53:23 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.143 ************************************ 00:07:30.143 END TEST app_cmdline 00:07:30.143 ************************************ 00:07:30.143 00:53:23 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:30.143 00:53:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.143 00:53:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.143 00:53:23 -- common/autotest_common.sh@10 -- # set +x 00:07:30.143 ************************************ 00:07:30.143 START TEST version 00:07:30.143 ************************************ 00:07:30.143 00:53:23 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:30.143 * Looking for test storage... 00:07:30.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:30.424 00:53:23 version -- app/version.sh@17 -- # get_header_version major 00:07:30.424 00:53:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.424 00:53:23 version -- app/version.sh@14 -- # cut -f2 00:07:30.424 00:53:23 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.424 00:53:23 version -- app/version.sh@17 -- # major=24 00:07:30.424 00:53:23 version -- app/version.sh@18 -- # get_header_version minor 00:07:30.424 00:53:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.424 00:53:23 version -- app/version.sh@14 -- # cut -f2 00:07:30.424 00:53:23 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.424 00:53:23 version -- app/version.sh@18 -- # minor=5 00:07:30.424 00:53:23 version -- app/version.sh@19 -- # get_header_version patch 00:07:30.424 00:53:23 version -- app/version.sh@14 -- # cut -f2 00:07:30.424 00:53:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.424 00:53:23 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.424 00:53:23 version -- app/version.sh@19 -- # patch=1 00:07:30.424 00:53:23 version -- app/version.sh@20 -- # get_header_version suffix 00:07:30.424 00:53:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.424 00:53:23 version -- app/version.sh@14 -- # cut -f2 00:07:30.424 00:53:23 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.424 00:53:23 version -- app/version.sh@20 -- # suffix=-pre 00:07:30.424 00:53:23 version -- app/version.sh@22 -- # version=24.5 00:07:30.424 00:53:23 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:30.424 00:53:23 version -- app/version.sh@25 -- # version=24.5.1 00:07:30.424 00:53:23 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:30.424 00:53:23 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:30.424 00:53:23 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
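Note: version.sh reconstructs the version string from include/spdk/version.h with the grep/cut/tr pipeline traced above and then cross-checks it against the Python package. The get_header_version helper in test/app/version.sh boils down to this pattern (a sketch of the traced pipeline, not a verbatim copy of the script):

    get_header_version() {  # e.g. get_header_version major  ->  24
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    version="$(get_header_version major).$(get_header_version minor)"  # 24.5 in this run

cut -f2 relies on the header separating the macro name from its value with a tab, which the successful extraction of major=24 above shows holds for this tree.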
00:07:30.424 00:53:23 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:30.424 00:53:23 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:30.424 00:07:30.424 real 0m0.104s 00:07:30.424 user 0m0.048s 00:07:30.424 sys 0m0.076s 00:07:30.424 00:53:23 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.424 00:53:23 version -- common/autotest_common.sh@10 -- # set +x 00:07:30.424 ************************************ 00:07:30.424 END TEST version 00:07:30.424 ************************************ 00:07:30.424 00:53:23 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:30.424 00:53:23 -- spdk/autotest.sh@198 -- # uname -s 00:07:30.424 00:53:23 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:30.424 00:53:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:30.424 00:53:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:30.424 00:53:23 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:30.424 00:53:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:30.424 00:53:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:30.424 00:53:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.424 00:53:23 -- common/autotest_common.sh@10 -- # set +x 00:07:30.424 00:53:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:30.424 00:53:23 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:30.424 00:53:23 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:30.424 00:53:23 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:30.424 00:53:23 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:30.424 00:53:23 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:30.424 00:53:23 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.424 00:53:23 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:30.424 00:53:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.424 00:53:23 -- common/autotest_common.sh@10 -- # set +x 00:07:30.424 ************************************ 00:07:30.424 START TEST nvmf_tcp 00:07:30.424 ************************************ 00:07:30.424 00:53:23 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.424 * Looking for test storage... 00:07:30.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:30.424 00:53:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.425 00:53:23 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.425 00:53:23 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.425 00:53:23 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.425 00:53:23 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.425 00:53:23 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.425 00:53:23 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.425 00:53:23 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:30.425 00:53:23 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:30.425 00:53:23 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:30.425 00:53:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:30.425 00:53:23 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:30.425 00:53:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:30.425 00:53:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.425 00:53:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.425 ************************************ 00:07:30.425 START TEST nvmf_example 00:07:30.425 ************************************ 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:30.425 * Looking for test storage... 
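Note: nvmf/common.sh, sourced in the trace just below, derives the host identity used by every connect the TCP tests issue. The relevant lines amount to the following sketch; the uuid-suffix extraction is an assumption about how NVME_HOSTID relates to the generated NQN, consistent with the matching values in this run:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # bare <uuid>; 5b23e107-... in this run
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

Both values are then passed through "${NVME_HOST[@]}" to each nvme connect invocation.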
00:07:30.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:30.425 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.426 00:53:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.953 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.954 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.954 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:32.954 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:32.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:07:32.954 00:07:32.954 --- 10.0.0.2 ping statistics --- 00:07:32.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.954 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:32.954 00:07:32.954 --- 10.0.0.1 ping statistics --- 00:07:32.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.954 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3654393 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3654393 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3654393 ']' 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
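The nvmf_tcp_init trace above splits the two ports of the E810 NIC into a same-host target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), an iptables rule admits traffic on the NVMe/TCP default port 4420, and the two pings confirm reachability in both directions. A condensed, hand-runnable sketch of the same steps, assuming root and the interface/namespace names from this run:

    IF_TGT=cvl_0_0; IF_INI=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$IF_TGT"; ip -4 addr flush "$IF_INI"
    ip netns add "$NS"
    ip link set "$IF_TGT" netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$IF_INI"          # initiator keeps the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"
    ip link set "$IF_INI" up
    ip netns exec "$NS" ip link set "$IF_TGT" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
    ping -c 1 10.0.0.2                             # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target ns -> initiator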
00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:32.954 00:53:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.954 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.887 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:33.888 00:53:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:33.888 EAL: No free 2048 kB hugepages reported on node 1 
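rpc_cmd in the trace above is the harness's thin wrapper around SPDK's scripts/rpc.py (talking to the target on /var/tmp/spdk.sock), and those five calls provision the complete export path before spdk_nvme_perf is launched: a TCP transport, a 64 MiB RAM-backed bdev, a subsystem open to any host, the bdev attached as a namespace, and a listener on the target's in-namespace address. A standalone sketch of the same sequence with the values from this run, assuming it is issued from an SPDK checkout:

    RPC=scripts/rpc.py                                   # default RPC socket: /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192         # -u: in-capsule data size in bytes
    $RPC bdev_malloc_create 64 512                       # 64 MiB, 512 B blocks; prints the bdev name "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420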
00:07:43.851 Initializing NVMe Controllers 00:07:43.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:43.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:43.851 Initialization complete. Launching workers. 00:07:43.851 ======================================================== 00:07:43.851 Latency(us) 00:07:43.851 Device Information : IOPS MiB/s Average min max 00:07:43.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15092.42 58.95 4240.08 928.10 15802.54 00:07:43.851 ======================================================== 00:07:43.851 Total : 15092.42 58.95 4240.08 928.10 15802.54 00:07:43.851 00:07:43.851 00:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:43.851 00:53:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:43.852 00:53:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.852 00:53:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:43.852 00:53:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.852 00:53:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:43.852 00:53:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.852 00:53:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.852 rmmod nvme_tcp 00:07:43.852 rmmod nvme_fabrics 00:07:43.852 rmmod nvme_keyring 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3654393 ']' 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3654393 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3654393 ']' 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3654393 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:44.109 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3654393 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3654393' 00:07:44.110 killing process with pid 3654393 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3654393 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3654393 00:07:44.110 nvmf threads initialize successfully 00:07:44.110 bdev subsystem init successfully 00:07:44.110 created a nvmf target service 00:07:44.110 create targets's poll groups done 00:07:44.110 all subsystems of target started 00:07:44.110 nvmf target is running 00:07:44.110 all subsystems of target stopped 00:07:44.110 destroy targets's poll groups done 00:07:44.110 destroyed the nvmf target service 00:07:44.110 bdev subsystem finish successfully 00:07:44.110 nvmf threads destroy successfully 00:07:44.110 00:53:37 
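A quick consistency check of the perf table above (the run was launched with -q 64 -o 4096): throughput is IOPS times block size, and by Little's law the average latency is queue depth over IOPS, so both derived columns can be verified from the IOPS figure alone:

    15092.42 IOPS x 4096 B   =  61.8 MB/s  ~  58.95 MiB/s   (matches the MiB/s column)
    64 (qd) / 15092.42 IOPS  =  4.2406 ms  ~  4240 us       (matches the 4240.08 us Average)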
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.110 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.368 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.368 00:53:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.368 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.368 00:53:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.269 00:53:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:46.269 00:53:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:46.269 00:53:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.269 00:53:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.269 00:07:46.269 real 0m15.826s 00:07:46.269 user 0m45.039s 00:07:46.269 sys 0m3.277s 00:07:46.269 00:53:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.269 00:53:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.269 ************************************ 00:07:46.269 END TEST nvmf_example 00:07:46.269 ************************************ 00:07:46.269 00:53:39 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:46.269 00:53:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:46.269 00:53:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.269 00:53:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.269 ************************************ 00:07:46.269 START TEST nvmf_filesystem 00:07:46.269 ************************************ 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:46.269 * Looking for test storage... 
00:07:46.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:46.269 00:53:39 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:46.269 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:46.529 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.530 00:53:39 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:46.530 #define SPDK_CONFIG_H 00:07:46.530 #define SPDK_CONFIG_APPS 1 00:07:46.530 #define SPDK_CONFIG_ARCH native 00:07:46.530 #undef SPDK_CONFIG_ASAN 00:07:46.530 #undef SPDK_CONFIG_AVAHI 00:07:46.530 #undef SPDK_CONFIG_CET 00:07:46.530 #define SPDK_CONFIG_COVERAGE 1 00:07:46.530 #define SPDK_CONFIG_CROSS_PREFIX 00:07:46.530 #undef SPDK_CONFIG_CRYPTO 00:07:46.530 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:46.530 #undef SPDK_CONFIG_CUSTOMOCF 00:07:46.530 #undef SPDK_CONFIG_DAOS 00:07:46.530 #define SPDK_CONFIG_DAOS_DIR 00:07:46.530 #define SPDK_CONFIG_DEBUG 1 00:07:46.530 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:46.530 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:46.530 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:46.530 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:46.530 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:46.530 #undef SPDK_CONFIG_DPDK_UADK 00:07:46.530 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:46.530 #define SPDK_CONFIG_EXAMPLES 1 00:07:46.530 #undef SPDK_CONFIG_FC 00:07:46.530 #define SPDK_CONFIG_FC_PATH 00:07:46.530 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:46.530 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:46.530 #undef SPDK_CONFIG_FUSE 00:07:46.530 #undef SPDK_CONFIG_FUZZER 00:07:46.530 #define SPDK_CONFIG_FUZZER_LIB 00:07:46.530 #undef SPDK_CONFIG_GOLANG 00:07:46.530 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:46.530 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:46.530 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:46.530 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:46.530 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:46.530 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:46.530 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:46.530 #define SPDK_CONFIG_IDXD 1 00:07:46.530 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:46.530 #undef SPDK_CONFIG_IPSEC_MB 00:07:46.530 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:46.530 #define SPDK_CONFIG_ISAL 1 00:07:46.530 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:46.530 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:46.530 #define SPDK_CONFIG_LIBDIR 00:07:46.530 #undef SPDK_CONFIG_LTO 00:07:46.530 #define SPDK_CONFIG_MAX_LCORES 
00:07:46.530 #define SPDK_CONFIG_NVME_CUSE 1 00:07:46.530 #undef SPDK_CONFIG_OCF 00:07:46.530 #define SPDK_CONFIG_OCF_PATH 00:07:46.530 #define SPDK_CONFIG_OPENSSL_PATH 00:07:46.530 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:46.530 #define SPDK_CONFIG_PGO_DIR 00:07:46.530 #undef SPDK_CONFIG_PGO_USE 00:07:46.530 #define SPDK_CONFIG_PREFIX /usr/local 00:07:46.530 #undef SPDK_CONFIG_RAID5F 00:07:46.530 #undef SPDK_CONFIG_RBD 00:07:46.530 #define SPDK_CONFIG_RDMA 1 00:07:46.530 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:46.530 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:46.530 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:46.530 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:46.530 #define SPDK_CONFIG_SHARED 1 00:07:46.530 #undef SPDK_CONFIG_SMA 00:07:46.530 #define SPDK_CONFIG_TESTS 1 00:07:46.530 #undef SPDK_CONFIG_TSAN 00:07:46.530 #define SPDK_CONFIG_UBLK 1 00:07:46.530 #define SPDK_CONFIG_UBSAN 1 00:07:46.530 #undef SPDK_CONFIG_UNIT_TESTS 00:07:46.530 #undef SPDK_CONFIG_URING 00:07:46.530 #define SPDK_CONFIG_URING_PATH 00:07:46.530 #undef SPDK_CONFIG_URING_ZNS 00:07:46.530 #undef SPDK_CONFIG_USDT 00:07:46.530 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:46.530 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:46.530 #define SPDK_CONFIG_VFIO_USER 1 00:07:46.530 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:46.530 #define SPDK_CONFIG_VHOST 1 00:07:46.530 #define SPDK_CONFIG_VIRTIO 1 00:07:46.530 #undef SPDK_CONFIG_VTUNE 00:07:46.530 #define SPDK_CONFIG_VTUNE_DIR 00:07:46.530 #define SPDK_CONFIG_WERROR 1 00:07:46.530 #define SPDK_CONFIG_WPDK_DIR 00:07:46.530 #undef SPDK_CONFIG_XNVME 00:07:46.530 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.530 00:53:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:46.531 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:46.532 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3656097 ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3656097 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.yIgXek 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yIgXek/tests/target /tmp/spdk.yIgXek 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52941127680 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9053601792 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30993989632 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390187008 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.533 00:53:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996615168 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=749568 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:46.533 * Looking for test storage... 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52941127680 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11268194304 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:46.533 
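The set_test_storage trace above walks `df -T`, caches each mount's filesystem type and free space in associative arrays, then settles on the first candidate directory with room for the requested ~2 GiB. A trimmed-down sketch of that probe (the `-B1` flag and the exact loop shape are simplifications, not the harness's verbatim code):

  requested_size=$((2 * 1024 * 1024 * 1024))        # ~2 GiB, as requested above
  declare -A fss avails
  while read -r source fs size used avail _ mount; do
      fss["$mount"]=$fs
      avails["$mount"]=$avail
  done < <(df -T -B1 | grep -v Filesystem)          # -B1: report sizes in bytes
  for target_dir in "$testdir" "$storage_fallback"; do   # storage_candidates above
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
      if (( ${avails[$mount]:-0} >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir
          break
      fi
  done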
00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:46.533 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.534 00:53:39 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
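The PATH echoed above has ballooned because paths/export.sh prepends its toolchain directories unconditionally each time it is sourced. A guard like the following would keep the prepend idempotent; this is a suggested sketch, not what export.sh currently does:

  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;              # already present, nothing to do
          *) PATH="$1:$PATH" ;;
      esac
  }
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/golangci/1.54.2/bin
  export PATH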
00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.534 00:53:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.433 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
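gather_supported_nvmf_pci_devs above buckets NICs by PCI vendor:device pairs (Intel E810/X722, several Mellanox ConnectX parts) before deciding which ports to test. The same lookup can be sketched directly against sysfs (a simplified stand-in for the harness's pci_bus_cache):

  intel=0x8086
  e810=()
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")
      device=$(<"$pci/device")
      case "$vendor:$device" in
          "$intel:0x1592"|"$intel:0x159b") e810+=("${pci##*/}") ;;
      esac
  done
  for bdf in "${e810[@]}"; do                 # expect 0000:0a:00.0/.1 here
      for netdev in /sys/bus/pci/devices/"$bdf"/net/*; do
          echo "Found net devices under $bdf: ${netdev##*/}"
      done
  done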
00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:48.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:48.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.434 00:53:41 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:48.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:48.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.434 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:48.692 00:07:48.692 --- 10.0.0.2 ping statistics --- 00:07:48.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.692 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:07:48.692 00:07:48.692 --- 10.0.0.1 ping statistics --- 00:07:48.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.692 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.692 ************************************ 00:07:48.692 START TEST nvmf_filesystem_no_in_capsule 00:07:48.692 ************************************ 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:48.692 00:53:41 
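nvmf_tcp_init above builds the test topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are pinged before the target starts. Condensed from the trace (interface and namespace names copied verbatim; address flushes and error handling omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator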
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3657720 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3657720 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3657720 ']' 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.692 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.693 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.693 [2024-07-25 00:53:41.737128] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:07:48.693 [2024-07-25 00:53:41.737202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.693 [2024-07-25 00:53:41.808731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.950 [2024-07-25 00:53:41.905846] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.950 [2024-07-25 00:53:41.905908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.950 [2024-07-25 00:53:41.905933] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.950 [2024-07-25 00:53:41.905953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.950 [2024-07-25 00:53:41.905970] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
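nvmfappstart above launches nvmf_tgt (pid 3657720) inside the target namespace, then waitforlisten polls the RPC socket until the app answers. A hedged sketch of that start-and-wait shape (the retry budget and relative paths are assumptions):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods succeeds once the app listens on /var/tmp/spdk.sock
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.5
  done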
00:07:48.950 [2024-07-25 00:53:41.906066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.950 [2024-07-25 00:53:41.906121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.950 [2024-07-25 00:53:41.906272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.950 [2024-07-25 00:53:41.906279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.950 [2024-07-25 00:53:42.065084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.950 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.208 Malloc1 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.208 [2024-07-25 00:53:42.235598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.208 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:49.208 { 00:07:49.208 "name": "Malloc1", 00:07:49.208 "aliases": [ 00:07:49.208 "dee592be-03fe-4666-9fe3-85e034615d38" 00:07:49.208 ], 00:07:49.208 "product_name": "Malloc disk", 00:07:49.208 "block_size": 512, 00:07:49.208 "num_blocks": 1048576, 00:07:49.208 "uuid": "dee592be-03fe-4666-9fe3-85e034615d38", 00:07:49.208 "assigned_rate_limits": { 00:07:49.208 "rw_ios_per_sec": 0, 00:07:49.208 "rw_mbytes_per_sec": 0, 00:07:49.208 "r_mbytes_per_sec": 0, 00:07:49.208 "w_mbytes_per_sec": 0 00:07:49.208 }, 00:07:49.208 "claimed": true, 00:07:49.209 "claim_type": "exclusive_write", 00:07:49.209 "zoned": false, 00:07:49.209 "supported_io_types": { 00:07:49.209 "read": true, 00:07:49.209 "write": true, 00:07:49.209 "unmap": true, 00:07:49.209 "write_zeroes": true, 00:07:49.209 "flush": true, 00:07:49.209 "reset": true, 00:07:49.209 "compare": false, 00:07:49.209 "compare_and_write": false, 00:07:49.209 "abort": true, 00:07:49.209 "nvme_admin": false, 00:07:49.209 "nvme_io": false 00:07:49.209 }, 00:07:49.209 "memory_domains": [ 00:07:49.209 { 00:07:49.209 "dma_device_id": "system", 00:07:49.209 "dma_device_type": 1 00:07:49.209 }, 00:07:49.209 { 00:07:49.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.209 "dma_device_type": 2 00:07:49.209 } 00:07:49.209 ], 00:07:49.209 "driver_specific": {} 00:07:49.209 } 00:07:49.209 ]' 00:07:49.209 
00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:49.209 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:49.209 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:49.209 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:49.209 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:49.209 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:49.209 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:49.209 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.139 00:53:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.139 00:53:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:50.139 00:53:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.139 00:53:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:50.139 00:53:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:52.032 00:53:45 
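get_bdev_size above fetches the Malloc1 descriptor over RPC and derives the size in MiB from block_size and num_blocks; standalone, the same computation is (rpc.py invocation shape assumed):

  bdev_info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 512 in the trace
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1048576 in the trace
  echo $(( bs * nb / 1024 / 1024 ))               # 512 MiB; malloc_size above
                                                  # is this *1024*1024 = 536870912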
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:52.032 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:52.289 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:52.546 00:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.479 ************************************ 00:07:53.479 START TEST filesystem_ext4 00:07:53.479 ************************************ 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:53.479 00:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:53.479 mke2fs 1.46.5 (30-Dec-2021) 00:07:53.736 Discarding device blocks: 0/522240 done 00:07:53.736 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:53.736 
Filesystem UUID: ce80907a-6955-4bd1-8bd6-24f34c978f78 00:07:53.736 Superblock backups stored on blocks: 00:07:53.736 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:53.736 00:07:53.736 Allocating group tables: 0/64 done 00:07:53.736 Writing inode tables: 0/64 done 00:07:53.994 Creating journal (8192 blocks): done 00:07:55.184 Writing superblocks and filesystem accounting information: 0/64 done 00:07:55.184 00:07:55.184 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:55.184 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3657720 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.442 00:07:55.442 real 0m1.916s 00:07:55.442 user 0m0.021s 00:07:55.442 sys 0m0.055s 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:55.442 ************************************ 00:07:55.442 END TEST filesystem_ext4 00:07:55.442 ************************************ 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.442 ************************************ 00:07:55.442 START TEST filesystem_btrfs 00:07:55.442 ************************************ 00:07:55.442 00:53:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:55.442 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:55.701 btrfs-progs v6.6.2 00:07:55.701 See https://btrfs.readthedocs.io for more information. 00:07:55.701 00:07:55.701 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:55.701 NOTE: several default settings have changed in version 5.15, please make sure
00:07:55.701 this does not affect your deployments:
00:07:55.701 - DUP for metadata (-m dup)
00:07:55.701 - enabled no-holes (-O no-holes)
00:07:55.701 - enabled free-space-tree (-R free-space-tree)
00:07:55.701
00:07:55.701 Label: (null)
00:07:55.701 UUID: 3971c11d-6e84-4ce3-9b5d-263c33ed8844
00:07:55.701 Node size: 16384
00:07:55.701 Sector size: 4096
00:07:55.701 Filesystem size: 510.00MiB
00:07:55.701 Block group profiles:
00:07:55.701 Data: single 8.00MiB
00:07:55.701 Metadata: DUP 32.00MiB
00:07:55.701 System: DUP 8.00MiB
00:07:55.701 SSD detected: yes
00:07:55.701 Zoned device: no
00:07:55.701 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:07:55.701 Runtime features: free-space-tree
00:07:55.701 Checksum: crc32c
00:07:55.701 Number of devices: 1
00:07:55.701 Devices:
00:07:55.701 ID SIZE PATH
00:07:55.701 1 510.00MiB /dev/nvme0n1p1
00:07:55.701
00:07:55.701 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0
00:07:55.701 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3657720
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:56.675
00:07:56.675 real 0m1.155s
00:07:56.675 user 0m0.012s
00:07:56.675 sys 0m0.117s
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:07:56.675 ************************************
00:07:56.675 END TEST filesystem_btrfs ************************************
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:07:56.675 00:53:49
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:56.675 ************************************
00:07:56.675 START TEST filesystem_xfs ************************************
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']'
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f
00:07:56.675 00:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1
00:07:56.933 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:07:56.933 = sectsz=512 attr=2, projid32bit=1
00:07:56.933 = crc=1 finobt=1, sparse=1, rmapbt=0
00:07:56.933 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:07:56.933 data = bsize=4096 blocks=130560, imaxpct=25
00:07:56.933 = sunit=0 swidth=0 blks
00:07:56.933 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:07:56.933 log =internal log bsize=4096 blocks=16384, version=2
00:07:56.933 = sectsz=512 sunit=0 blks, lazy-count=1
00:07:56.933 realtime =none extsz=4096 blocks=0, rtextents=0
00:07:57.868 Discarding blocks...Done.
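The xfs pass now repeats the same mount/write/unmount check that ext4 and btrfs just went through. Pieced together from the target/filesystem.sh@18-43 xtrace lines above, the helper driving each pass looks roughly like the sketch below; this is a reconstruction from the trace, not the verbatim SPDK script, and $nvmfpid stands for the target pid (3657720 in this run).

    # Sketch of nvmf_filesystem_create as inferred from the xtrace in this log.
    nvmf_filesystem_create() {
        local fstype=$1
        local nvme_name=$2                               # e.g. nvme0n1
        make_filesystem "$fstype" "/dev/${nvme_name}p1"  # sh@21
        mount "/dev/${nvme_name}p1" /mnt/device          # sh@23
        touch /mnt/device/aaa                            # sh@24: file creation works?
        sync                                             # sh@25
        rm /mnt/device/aaa                               # sh@26: deletion works?
        sync                                             # sh@27
        local i=0                                        # sh@29
        umount /mnt/device                               # sh@30
        kill -0 "$nvmfpid"                               # sh@37: target process still alive
        lsblk -l -o NAME | grep -q -w "$nvme_name"       # sh@40: namespace still visible
        lsblk -l -o NAME | grep -q -w "${nvme_name}p1"   # sh@43: partition intact
    }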
00:07:57.868 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:57.868 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3657720 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.766 00:07:59.766 real 0m2.892s 00:07:59.766 user 0m0.019s 00:07:59.766 sys 0m0.055s 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.766 ************************************ 00:07:59.766 END TEST filesystem_xfs 00:07:59.766 ************************************ 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:59.766 
00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3657720 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3657720 ']' 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3657720 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3657720 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3657720' 00:07:59.766 killing process with pid 3657720 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3657720 00:07:59.766 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3657720 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:00.332 00:08:00.332 real 0m11.610s 00:08:00.332 user 0m44.444s 00:08:00.332 sys 0m1.783s 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.332 ************************************ 00:08:00.332 END TEST nvmf_filesystem_no_in_capsule 00:08:00.332 ************************************ 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.332 
************************************ 00:08:00.332 START TEST nvmf_filesystem_in_capsule 00:08:00.332 ************************************ 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3659289 00:08:00.332 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.333 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3659289 00:08:00.333 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3659289 ']' 00:08:00.333 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.333 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:00.333 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.333 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:00.333 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.333 [2024-07-25 00:53:53.399743] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:08:00.333 [2024-07-25 00:53:53.399835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.333 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.333 [2024-07-25 00:53:53.479446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.590 [2024-07-25 00:53:53.575095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.590 [2024-07-25 00:53:53.575156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.590 [2024-07-25 00:53:53.575183] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.590 [2024-07-25 00:53:53.575205] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.590 [2024-07-25 00:53:53.575223] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
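The second half of the suite reruns the same filesystem checks with in-capsule data enabled: run_test invoked nvmf_filesystem_part with 4096 instead of 0, and that value reaches the transport through -c (see the nvmf_create_transport trace just below). In outline, the bring-up traced here amounts to the following sketch, where rpc.py stands for the rpc_cmd wrapper used by the scripts:

    # Sketch of the traced target bring-up; flags copied from the xtrace below.
    in_capsule=4096
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                     # 3659289 in this run
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c "$in_capsule"
    rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420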
00:08:00.590 [2024-07-25 00:53:53.575307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.590 [2024-07-25 00:53:53.575381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.590 [2024-07-25 00:53:53.575407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.590 [2024-07-25 00:53:53.575417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.590 [2024-07-25 00:53:53.724828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.590 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.848 Malloc1 00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.848 00:53:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:00.848 [2024-07-25 00:53:53.897688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:08:00.848 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[
00:08:00.849 {
00:08:00.849 "name": "Malloc1",
00:08:00.849 "aliases": [
00:08:00.849 "4aa26446-937d-4b8c-b425-98bdb8e43545"
00:08:00.849 ],
00:08:00.849 "product_name": "Malloc disk",
00:08:00.849 "block_size": 512,
00:08:00.849 "num_blocks": 1048576,
00:08:00.849 "uuid": "4aa26446-937d-4b8c-b425-98bdb8e43545",
00:08:00.849 "assigned_rate_limits": {
00:08:00.849 "rw_ios_per_sec": 0,
00:08:00.849 "rw_mbytes_per_sec": 0,
00:08:00.849 "r_mbytes_per_sec": 0,
00:08:00.849 "w_mbytes_per_sec": 0
00:08:00.849 },
00:08:00.849 "claimed": true,
00:08:00.849 "claim_type": "exclusive_write",
00:08:00.849 "zoned": false,
00:08:00.849 "supported_io_types": {
00:08:00.849 "read": true,
00:08:00.849 "write": true,
00:08:00.849 "unmap": true,
00:08:00.849 "write_zeroes": true,
00:08:00.849 "flush": true,
00:08:00.849 "reset": true,
00:08:00.849 "compare": false,
00:08:00.849 "compare_and_write": false,
00:08:00.849 "abort": true,
00:08:00.849 "nvme_admin": false,
00:08:00.849 "nvme_io": false
00:08:00.849 },
00:08:00.849 "memory_domains": [
00:08:00.849 {
00:08:00.849 "dma_device_id": "system",
00:08:00.849 "dma_device_type": 1
00:08:00.849 },
00:08:00.849 {
00:08:00.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:00.849 "dma_device_type": 2
00:08:00.849 }
00:08:00.849 ],
00:08:00.849 "driver_specific": {}
00:08:00.849 }
00:08:00.849 ]'
00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:00.849 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:01.783 00:53:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:01.783 00:53:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:01.783 00:53:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:01.783 00:53:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:01.783 00:53:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:03.682 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:04.248 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:05.618 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:05.618 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:05.618 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.619 ************************************ 00:08:05.619 START TEST filesystem_in_capsule_ext4 00:08:05.619 ************************************ 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:05.619 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:05.619 mke2fs 1.46.5 (30-Dec-2021) 00:08:05.619 Discarding device blocks: 0/522240 done 00:08:05.619 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:05.619 Filesystem UUID: b9d65ccc-fd42-496a-9899-3b55b536a866 00:08:05.619 Superblock backups stored on blocks: 00:08:05.619 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:05.619 00:08:05.619 Allocating group tables: 0/64 done 00:08:05.619 Writing inode tables: 0/64 done 00:08:08.150 Creating journal (8192 blocks): done 00:08:08.150 Writing superblocks and filesystem accounting information: 0/64 done 00:08:08.150 00:08:08.150 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:08.150 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3659289 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.408 00:08:08.408 real 0m3.081s 00:08:08.408 user 0m0.013s 00:08:08.408 sys 0m0.056s 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:08.408 ************************************ 00:08:08.408 END TEST filesystem_in_capsule_ext4 00:08:08.408 ************************************ 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.408 ************************************ 00:08:08.408 START TEST filesystem_in_capsule_btrfs 00:08:08.408 ************************************ 00:08:08.408 00:54:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:08.408 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:08.974 btrfs-progs v6.6.2 00:08:08.974 See https://btrfs.readthedocs.io for more information. 00:08:08.974 00:08:08.974 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:08.974 NOTE: several default settings have changed in version 5.15, please make sure
00:08:08.974 this does not affect your deployments:
00:08:08.974 - DUP for metadata (-m dup)
00:08:08.974 - enabled no-holes (-O no-holes)
00:08:08.974 - enabled free-space-tree (-R free-space-tree)
00:08:08.974
00:08:08.974 Label: (null)
00:08:08.974 UUID: f871b8bd-6d5c-40cd-90e9-440adc510e82
00:08:08.974 Node size: 16384
00:08:08.974 Sector size: 4096
00:08:08.974 Filesystem size: 510.00MiB
00:08:08.974 Block group profiles:
00:08:08.974 Data: single 8.00MiB
00:08:08.974 Metadata: DUP 32.00MiB
00:08:08.974 System: DUP 8.00MiB
00:08:08.974 SSD detected: yes
00:08:08.974 Zoned device: no
00:08:08.974 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:08:08.974 Runtime features: free-space-tree
00:08:08.974 Checksum: crc32c
00:08:08.974 Number of devices: 1
00:08:08.974 Devices:
00:08:08.974 ID SIZE PATH
00:08:08.974 1 510.00MiB /dev/nvme0n1p1
00:08:08.974
00:08:08.974 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0
00:08:08.974 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3659289
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:08:09.231
00:08:09.231 real 0m0.814s
00:08:09.231 user 0m0.013s
00:08:09.231 sys 0m0.126s
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:08:09.231 ************************************
00:08:09.231 END TEST filesystem_in_capsule_btrfs ************************************
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:09.231 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:09.489 ************************************
00:08:09.489 START TEST filesystem_in_capsule_xfs ************************************
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']'
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f
00:08:09.489 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1
00:08:09.489 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:08:09.489 = sectsz=512 attr=2, projid32bit=1
00:08:09.489 = crc=1 finobt=1, sparse=1, rmapbt=0
00:08:09.489 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:08:09.489 data = bsize=4096 blocks=130560, imaxpct=25
00:08:09.489 = sunit=0 swidth=0 blks
00:08:09.489 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:08:09.489 log =internal log bsize=4096 blocks=16384, version=2
00:08:09.489 = sectsz=512 sunit=0 blks, lazy-count=1
00:08:09.489 realtime =none extsz=4096 blocks=0, rtextents=0
00:08:10.422 Discarding blocks...Done.
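All six mkfs invocations in this log funnel through the same make_filesystem helper; the common/autotest_common.sh@922-@941 line numbers in the traces outline its body. A rough reconstruction follows; the counter i declared at @924 hints at a retry loop that never fires in this run, so only the traced happy path is shown:

    # make_filesystem as reconstructed from the traced lines; details beyond
    # the visible @922-@941 steps are inferred.
    make_filesystem() {
        local fstype=$1                 # @922
        local dev_name=$2               # @923
        local i=0                       # @924
        local force                     # @925
        if [ "$fstype" = ext4 ]; then   # @927
            force=-F                    # @928: mkfs.ext4 spells force as -F
        else
            force=-f                    # @930: btrfs/xfs take -f
        fi
        mkfs."$fstype" $force "$dev_name" && return 0   # @933 ... @941
    }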
00:08:10.422 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:10.422 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.324 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3659289 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.582 00:08:12.582 real 0m3.147s 00:08:12.582 user 0m0.025s 00:08:12.582 sys 0m0.051s 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.582 ************************************ 00:08:12.582 END TEST filesystem_in_capsule_xfs 00:08:12.582 ************************************ 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:12.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.582 00:54:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:12.582 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3659289 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3659289 ']' 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3659289 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3659289 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3659289' 00:08:12.840 killing process with pid 3659289 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3659289 00:08:12.840 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3659289 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:13.098 00:08:13.098 real 0m12.873s 00:08:13.098 user 0m49.393s 00:08:13.098 sys 0m1.903s 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.098 ************************************ 00:08:13.098 END TEST nvmf_filesystem_in_capsule 00:08:13.098 ************************************ 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.098 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.098 rmmod nvme_tcp 00:08:13.357 rmmod nvme_fabrics 00:08:13.357 rmmod nvme_keyring 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.357 00:54:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.257 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.257 00:08:15.257 real 0m28.987s 00:08:15.257 user 1m34.738s 00:08:15.257 sys 0m5.282s 00:08:15.257 00:54:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.257 00:54:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.257 ************************************ 00:08:15.257 END TEST nvmf_filesystem 00:08:15.257 ************************************ 00:08:15.257 00:54:08 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.257 00:54:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:15.257 00:54:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.257 00:54:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.257 ************************************ 00:08:15.257 START TEST nvmf_target_discovery 00:08:15.257 ************************************ 00:08:15.257 00:54:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.549 * Looking for test storage... 
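Every suite above and below is launched through run_test, which produces the starred START/END banners and the real/user/sys timing blocks scattered through this log. A minimal sketch of that wrapper, assuming the banner-and-time shape implied by the output (the real helper lives in common/autotest_common.sh and also propagates the exit status):

    # Assumed shape of the run_test banner/timing wrapper seen in this log.
    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                       # emits the real/user/sys lines
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }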
00:08:15.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:15.549 00:54:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.550 00:54:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.459 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.460 00:54:10 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:17.460 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:17.460 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:17.460 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:17.460 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:08:17.460 00:08:17.460 --- 10.0.0.2 ping statistics --- 00:08:17.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.460 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:08:17.460 00:08:17.460 --- 10.0.0.1 ping statistics --- 00:08:17.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.460 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:17.460 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3663519 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3663519 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3663519 ']' 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:17.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:17.461 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.719 [2024-07-25 00:54:10.654475] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:08:17.719 [2024-07-25 00:54:10.654552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.719 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.719 [2024-07-25 00:54:10.722838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.719 [2024-07-25 00:54:10.818858] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.719 [2024-07-25 00:54:10.818921] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.719 [2024-07-25 00:54:10.818947] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.719 [2024-07-25 00:54:10.818968] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.719 [2024-07-25 00:54:10.818989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.719 [2024-07-25 00:54:10.819074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.719 [2024-07-25 00:54:10.819137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.719 [2024-07-25 00:54:10.819186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.719 [2024-07-25 00:54:10.819192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.977 [2024-07-25 00:54:10.972105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:17.977 00:54:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.977 Null1 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.977 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:17.978 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 [2024-07-25 00:54:11.012449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 Null2 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:17.978 00:54:11 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 Null3 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 Null4 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.978 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.236 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.236 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:18.236 00:08:18.236 Discovery Log Number of Records 6, Generation counter 6 00:08:18.236 =====Discovery Log Entry 0====== 00:08:18.236 trtype: tcp 00:08:18.236 adrfam: ipv4 00:08:18.236 subtype: current discovery subsystem 00:08:18.236 treq: not required 00:08:18.236 portid: 0 00:08:18.236 trsvcid: 4420 00:08:18.236 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.236 traddr: 10.0.0.2 00:08:18.236 eflags: explicit discovery connections, duplicate discovery information 00:08:18.236 sectype: none 00:08:18.236 =====Discovery Log Entry 1====== 00:08:18.236 trtype: tcp 00:08:18.236 adrfam: ipv4 00:08:18.236 subtype: nvme subsystem 00:08:18.236 treq: not required 00:08:18.236 portid: 0 00:08:18.236 trsvcid: 4420 00:08:18.236 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:18.236 traddr: 10.0.0.2 00:08:18.236 eflags: none 00:08:18.236 sectype: none 00:08:18.236 =====Discovery Log Entry 2====== 00:08:18.236 trtype: tcp 00:08:18.236 adrfam: ipv4 00:08:18.236 subtype: nvme subsystem 00:08:18.236 treq: not required 00:08:18.236 portid: 0 00:08:18.236 trsvcid: 4420 00:08:18.236 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:18.236 traddr: 10.0.0.2 00:08:18.236 eflags: none 00:08:18.236 sectype: none 00:08:18.236 =====Discovery Log Entry 3====== 00:08:18.236 trtype: tcp 00:08:18.236 adrfam: ipv4 00:08:18.236 subtype: nvme subsystem 00:08:18.236 treq: not required 00:08:18.236 portid: 0 00:08:18.236 trsvcid: 4420 00:08:18.236 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:18.236 traddr: 10.0.0.2 00:08:18.236 eflags: none 00:08:18.236 sectype: none 00:08:18.236 =====Discovery Log Entry 4====== 00:08:18.236 trtype: tcp 00:08:18.236 adrfam: ipv4 00:08:18.236 subtype: nvme subsystem 00:08:18.236 treq: not required 
00:08:18.236 portid: 0 00:08:18.236 trsvcid: 4420 00:08:18.236 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:18.236 traddr: 10.0.0.2 00:08:18.236 eflags: none 00:08:18.236 sectype: none 00:08:18.236 =====Discovery Log Entry 5====== 00:08:18.236 trtype: tcp 00:08:18.236 adrfam: ipv4 00:08:18.236 subtype: discovery subsystem referral 00:08:18.236 treq: not required 00:08:18.236 portid: 0 00:08:18.236 trsvcid: 4430 00:08:18.236 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.236 traddr: 10.0.0.2 00:08:18.236 eflags: none 00:08:18.237 sectype: none 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:18.237 Perform nvmf subsystem discovery via RPC 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.237 [ 00:08:18.237 { 00:08:18.237 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:18.237 "subtype": "Discovery", 00:08:18.237 "listen_addresses": [ 00:08:18.237 { 00:08:18.237 "trtype": "TCP", 00:08:18.237 "adrfam": "IPv4", 00:08:18.237 "traddr": "10.0.0.2", 00:08:18.237 "trsvcid": "4420" 00:08:18.237 } 00:08:18.237 ], 00:08:18.237 "allow_any_host": true, 00:08:18.237 "hosts": [] 00:08:18.237 }, 00:08:18.237 { 00:08:18.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.237 "subtype": "NVMe", 00:08:18.237 "listen_addresses": [ 00:08:18.237 { 00:08:18.237 "trtype": "TCP", 00:08:18.237 "adrfam": "IPv4", 00:08:18.237 "traddr": "10.0.0.2", 00:08:18.237 "trsvcid": "4420" 00:08:18.237 } 00:08:18.237 ], 00:08:18.237 "allow_any_host": true, 00:08:18.237 "hosts": [], 00:08:18.237 "serial_number": "SPDK00000000000001", 00:08:18.237 "model_number": "SPDK bdev Controller", 00:08:18.237 "max_namespaces": 32, 00:08:18.237 "min_cntlid": 1, 00:08:18.237 "max_cntlid": 65519, 00:08:18.237 "namespaces": [ 00:08:18.237 { 00:08:18.237 "nsid": 1, 00:08:18.237 "bdev_name": "Null1", 00:08:18.237 "name": "Null1", 00:08:18.237 "nguid": "E2CC5EC2A6824C9FB48A9558BC6C5703", 00:08:18.237 "uuid": "e2cc5ec2-a682-4c9f-b48a-9558bc6c5703" 00:08:18.237 } 00:08:18.237 ] 00:08:18.237 }, 00:08:18.237 { 00:08:18.237 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:18.237 "subtype": "NVMe", 00:08:18.237 "listen_addresses": [ 00:08:18.237 { 00:08:18.237 "trtype": "TCP", 00:08:18.237 "adrfam": "IPv4", 00:08:18.237 "traddr": "10.0.0.2", 00:08:18.237 "trsvcid": "4420" 00:08:18.237 } 00:08:18.237 ], 00:08:18.237 "allow_any_host": true, 00:08:18.237 "hosts": [], 00:08:18.237 "serial_number": "SPDK00000000000002", 00:08:18.237 "model_number": "SPDK bdev Controller", 00:08:18.237 "max_namespaces": 32, 00:08:18.237 "min_cntlid": 1, 00:08:18.237 "max_cntlid": 65519, 00:08:18.237 "namespaces": [ 00:08:18.237 { 00:08:18.237 "nsid": 1, 00:08:18.237 "bdev_name": "Null2", 00:08:18.237 "name": "Null2", 00:08:18.237 "nguid": "BF7E2E5141284DCAA9071A07D86794AA", 00:08:18.237 "uuid": "bf7e2e51-4128-4dca-a907-1a07d86794aa" 00:08:18.237 } 00:08:18.237 ] 00:08:18.237 }, 00:08:18.237 { 00:08:18.237 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:18.237 "subtype": "NVMe", 00:08:18.237 "listen_addresses": [ 00:08:18.237 { 00:08:18.237 "trtype": "TCP", 00:08:18.237 "adrfam": "IPv4", 00:08:18.237 "traddr": "10.0.0.2", 00:08:18.237 "trsvcid": "4420" 00:08:18.237 } 00:08:18.237 ], 00:08:18.237 "allow_any_host": true, 
00:08:18.237 "hosts": [], 00:08:18.237 "serial_number": "SPDK00000000000003", 00:08:18.237 "model_number": "SPDK bdev Controller", 00:08:18.237 "max_namespaces": 32, 00:08:18.237 "min_cntlid": 1, 00:08:18.237 "max_cntlid": 65519, 00:08:18.237 "namespaces": [ 00:08:18.237 { 00:08:18.237 "nsid": 1, 00:08:18.237 "bdev_name": "Null3", 00:08:18.237 "name": "Null3", 00:08:18.237 "nguid": "7C80D6EFF50547B8B62F484BDE2F1435", 00:08:18.237 "uuid": "7c80d6ef-f505-47b8-b62f-484bde2f1435" 00:08:18.237 } 00:08:18.237 ] 00:08:18.237 }, 00:08:18.237 { 00:08:18.237 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:18.237 "subtype": "NVMe", 00:08:18.237 "listen_addresses": [ 00:08:18.237 { 00:08:18.237 "trtype": "TCP", 00:08:18.237 "adrfam": "IPv4", 00:08:18.237 "traddr": "10.0.0.2", 00:08:18.237 "trsvcid": "4420" 00:08:18.237 } 00:08:18.237 ], 00:08:18.237 "allow_any_host": true, 00:08:18.237 "hosts": [], 00:08:18.237 "serial_number": "SPDK00000000000004", 00:08:18.237 "model_number": "SPDK bdev Controller", 00:08:18.237 "max_namespaces": 32, 00:08:18.237 "min_cntlid": 1, 00:08:18.237 "max_cntlid": 65519, 00:08:18.237 "namespaces": [ 00:08:18.237 { 00:08:18.237 "nsid": 1, 00:08:18.237 "bdev_name": "Null4", 00:08:18.237 "name": "Null4", 00:08:18.237 "nguid": "38A44BFBD1B2417882D43D86D7276E79", 00:08:18.237 "uuid": "38a44bfb-d1b2-4178-82d4-3d86d7276e79" 00:08:18.237 } 00:08:18.237 ] 00:08:18.237 } 00:08:18.237 ] 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.237 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.495 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.496 rmmod nvme_tcp 00:08:18.496 rmmod nvme_fabrics 00:08:18.496 rmmod nvme_keyring 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3663519 ']' 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3663519 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3663519 ']' 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3663519 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3663519 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3663519' 00:08:18.496 killing process with pid 3663519 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3663519 00:08:18.496 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3663519 00:08:18.754 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.754 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.754 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.754 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.754 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.754 00:54:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.754 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.754 00:54:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.698 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.698 00:08:20.698 real 0m5.423s 00:08:20.698 user 0m4.600s 00:08:20.698 sys 0m1.790s 00:08:20.698 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.698 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.698 ************************************ 00:08:20.698 END TEST nvmf_target_discovery 00:08:20.698 ************************************ 00:08:20.956 00:54:13 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:20.956 00:54:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:20.956 00:54:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.956 00:54:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:20.956 ************************************ 00:08:20.956 START TEST nvmf_referrals 00:08:20.956 ************************************ 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:20.956 * Looking for test storage... 00:08:20.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.956 00:54:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
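The three referral addresses declared above (127.0.0.2 through 127.0.0.4, port 4430) are what the referrals suite later registers against the discovery subsystem and then removes. A sketch of the equivalent manual calls, reusing the same flags as the rpc_cmd invocations seen in the discovery suite above (scripts/rpc.py path assumed):

  # register a referral on the discovery subsystem, then remove it
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430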
00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.957 00:54:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.859 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.859 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.860 00:54:15 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:22.860 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:22.860 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.860 00:54:15 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:22.860 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:22.860 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.860 00:54:15 
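The sequence above is the core of nvmf_tcp_init: the first E810 port, cvl_0_0, becomes the target side inside a private network namespace, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace, using the interface names and addresses exactly as logged, the topology amounts to:

    ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

The trace continues by bringing both links up, inserting an iptables ACCEPT rule for TCP/4420 on the initiator side, and confirming reachability in both directions with single-packet pings before any NVMe-oF traffic starts.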
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:22.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:08:22.860 00:08:22.860 --- 10.0.0.2 ping statistics --- 00:08:22.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.860 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:22.860 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:08:22.860 00:08:22.860 --- 10.0.0.1 ping statistics --- 00:08:22.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.861 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.861 00:54:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.861 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:22.861 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.861 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:22.861 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3665608 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3665608 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3665608 ']' 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:23.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:23.119 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.119 [2024-07-25 00:54:16.061665] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:08:23.119 [2024-07-25 00:54:16.061749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.119 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.119 [2024-07-25 00:54:16.131757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.119 [2024-07-25 00:54:16.222313] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.119 [2024-07-25 00:54:16.222366] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.119 [2024-07-25 00:54:16.222389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.119 [2024-07-25 00:54:16.222408] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.119 [2024-07-25 00:54:16.222425] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.119 [2024-07-25 00:54:16.224356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.119 [2024-07-25 00:54:16.224389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.119 [2024-07-25 00:54:16.224416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.119 [2024-07-25 00:54:16.224419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.378 [2024-07-25 00:54:16.369922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.378 [2024-07-25 00:54:16.382142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
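With nvmf_tgt up inside the namespace and the discovery listener live on 10.0.0.2:8009, the script goes on to register three referrals pointing at 127.0.0.2 through 127.0.0.4. The rpc_cmd calls in the trace are the harness's wrapper around SPDK's scripts/rpc.py; a sketch of the equivalent direct invocations, assuming rpc.py from the same build tree and reusing the flags logged above:

    # the target runs inside the namespace, as traced at nvmf/common.sh@480
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430   # then .3 and .4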
00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.378 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:23.636 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.637 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.637 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.637 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.637 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:23.894 00:54:16 
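get_referral_ips checks the referral list from both ends: the rpc variant reads it straight from the target over the RPC socket, while the nvme variant reads the discovery log the way a host would. Both outputs are sorted and compared against the expected address set. The two probes, with the jq filters taken verbatim from the trace:

    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The remainder of the referrals trace repeats this pattern after each mutation: remove all three referrals and expect an empty set, then re-add 127.0.0.2 with -n to attach it either to the discovery NQN or to nqn.2016-06.io.spdk:cnode1 and verify that the advertised subtype and subnqn follow.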
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.894 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.895 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:23.895 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.895 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.153 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:24.154 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.154 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.154 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.154 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.154 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.154 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.412 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.669 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.926 rmmod nvme_tcp 00:08:24.926 rmmod nvme_fabrics 00:08:24.926 rmmod nvme_keyring 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3665608 ']' 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3665608 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3665608 ']' 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3665608 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:24.926 00:54:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3665608 00:08:24.926 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:24.926 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:24.926 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3665608' 00:08:24.926 killing process with pid 3665608 00:08:24.926 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3665608 00:08:24.926 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3665608 00:08:25.184 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.184 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:25.184 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.184 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.184 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.184 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.184 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.184 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.712 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:27.712 00:08:27.712 real 0m6.430s 00:08:27.712 user 0m9.372s 00:08:27.712 sys 0m2.047s 00:08:27.712 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:27.712 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.712 ************************************ 00:08:27.712 END TEST nvmf_referrals 00:08:27.712 ************************************ 00:08:27.712 00:54:20 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:27.712 00:54:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:27.712 00:54:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.712 00:54:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.712 ************************************ 00:08:27.712 START TEST nvmf_connect_disconnect 00:08:27.712 ************************************ 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:27.712 * Looking for test storage... 00:08:27.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.712 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.713 00:54:20 
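connect_disconnect.sh sources the same test/nvmf/common.sh, so the environment probing below (the paths/export.sh PATH stacking, the E810 scan, the namespace rebuild and ping checks) repeats the referrals run almost line for line. Each target test is dispatched the same way by nvmf.sh through the run_test wrapper, as traced above; schematically, with $rootdir standing in for the absolute workspace path in the log:

    run_test nvmf_connect_disconnect "$rootdir/test/nvmf/target/connect_disconnect.sh" --transport=tcp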
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.713 00:54:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:29.614 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:29.614 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.614 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:29.615 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:29.615 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:08:29.615 00:08:29.615 --- 10.0.0.2 ping statistics --- 00:08:29.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.615 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:08:29.615 00:08:29.615 --- 10.0.0.1 ping statistics --- 00:08:29.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.615 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3667906 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3667906 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3667906 ']' 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:29.615 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.615 [2024-07-25 00:54:22.702124] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:08:29.615 [2024-07-25 00:54:22.702209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.615 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.874 [2024-07-25 00:54:22.771412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.874 [2024-07-25 00:54:22.862251] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.874 [2024-07-25 00:54:22.862312] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.874 [2024-07-25 00:54:22.862337] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.874 [2024-07-25 00:54:22.862358] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.874 [2024-07-25 00:54:22.862378] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.874 [2024-07-25 00:54:22.862483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.874 [2024-07-25 00:54:22.862547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.874 [2024-07-25 00:54:22.862645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.874 [2024-07-25 00:54:22.862638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.874 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:29.874 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:29.874 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.874 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.874 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.874 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:29.874 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.874 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 [2024-07-25 00:54:23.016062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.874 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.132 00:54:23 
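Setup for this test is smaller than for the referrals one: a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) becomes a namespace of nqn.2016-06.io.spdk:cnode1 behind a data-plane listener on 10.0.0.2:4420, and the script then loops 100 times (num_iterations=100 in the trace that follows), connecting and disconnecting a host on each pass. Every "disconnected 1 controller(s)" line below is one completed iteration. One iteration, sketched with the addresses and NQNs from the trace; NVME_HOST also carries a matching --hostid, and the script waits for the namespace to appear between the two steps:

    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # -i 8: eight I/O queues, per NVME_CONNECT
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the "NQN:... disconnected 1 controller(s)" line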
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.132 [2024-07-25 00:54:23.069124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:30.132 00:54:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 
00:08:32.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... the intervening passes of the 100-iteration connect/disconnect loop (00:08:34.552 through 00:12:18.585), each logging an identical 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' record, elided ...] 00:12:20.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:12:20.479 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:20.479 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:20.479 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.479 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:20.479 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.479 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:20.479 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.479 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.479 rmmod nvme_tcp 00:12:20.736 rmmod nvme_fabrics 00:12:20.736 rmmod nvme_keyring 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3667906 ']' 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3667906 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z
3667906 ']' 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3667906 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3667906 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3667906' 00:12:20.736 killing process with pid 3667906 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3667906 00:12:20.736 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3667906 00:12:20.993 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.993 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.993 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.993 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.993 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.993 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.993 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.993 00:58:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.891 00:58:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.891 00:12:22.891 real 3m55.646s 00:12:22.891 user 14m57.130s 00:12:22.891 sys 0m34.520s 00:12:22.891 00:58:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:22.891 00:58:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.891 ************************************ 00:12:22.891 END TEST nvmf_connect_disconnect 00:12:22.891 ************************************ 00:12:22.891 00:58:16 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:22.891 00:58:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:22.891 00:58:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:22.891 00:58:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.149 ************************************ 00:12:23.149 START TEST nvmf_multitarget 00:12:23.149 ************************************ 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:23.149 * Looking for test storage... 
00:12:23.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.149 00:58:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.150 00:58:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.048 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:25.049 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:25.049 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:25.049 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:25.049 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.049 00:58:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:25.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:25.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:12:25.049 00:12:25.049 --- 10.0.0.2 ping statistics --- 00:12:25.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.049 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:12:25.049 00:12:25.049 --- 10.0.0.1 ping statistics --- 00:12:25.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.049 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3698936 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3698936 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3698936 ']' 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:25.049 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.049 [2024-07-25 00:58:18.181794] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:12:25.049 [2024-07-25 00:58:18.181865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.307 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.307 [2024-07-25 00:58:18.252896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.307 [2024-07-25 00:58:18.349293] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.307 [2024-07-25 00:58:18.349373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.307 [2024-07-25 00:58:18.349389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.307 [2024-07-25 00:58:18.349403] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.307 [2024-07-25 00:58:18.349414] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.307 [2024-07-25 00:58:18.353265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.307 [2024-07-25 00:58:18.353312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.307 [2024-07-25 00:58:18.353340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.307 [2024-07-25 00:58:18.353344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:25.565 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:25.823 "nvmf_tgt_1" 00:12:25.823 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:25.823 "nvmf_tgt_2" 00:12:25.823 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.823 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:25.823 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:25.823 
00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:26.080 true 00:12:26.080 00:58:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:26.080 true 00:12:26.080 00:58:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.080 00:58:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.338 rmmod nvme_tcp 00:12:26.338 rmmod nvme_fabrics 00:12:26.338 rmmod nvme_keyring 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3698936 ']' 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3698936 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3698936 ']' 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3698936 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3698936 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3698936' 00:12:26.338 killing process with pid 3698936 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3698936 00:12:26.338 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3698936 00:12:26.596 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.596 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.597 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.597 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.597 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.597 00:58:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.597 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.597 00:58:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.127 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.127 00:12:29.127 real 0m5.607s 00:12:29.127 user 0m6.400s 00:12:29.127 sys 0m1.847s 00:12:29.127 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:29.127 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.127 ************************************ 00:12:29.127 END TEST nvmf_multitarget 00:12:29.127 ************************************ 00:12:29.127 00:58:21 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.127 00:58:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:29.127 00:58:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:29.127 00:58:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.127 ************************************ 00:12:29.127 START TEST nvmf_rpc 00:12:29.127 ************************************ 00:12:29.127 00:58:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.127 * Looking for test storage... 00:12:29.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.127 00:58:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.127 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:29.127 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.127 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.127 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.128 00:58:21 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.128 
00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.128 00:58:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.027 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:31.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:31.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:31.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.028 
00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:31.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:31.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:12:31.028 00:12:31.028 --- 10.0.0.2 ping statistics --- 00:12:31.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.028 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:12:31.028 00:12:31.028 --- 10.0.0.1 ping statistics --- 00:12:31.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.028 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3701025 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3701025 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3701025 ']' 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:31.028 00:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.028 [2024-07-25 00:58:23.950107] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
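With both pings answering, the topology behind them is worth spelling out: nvmf_tcp_init moves one port of the NIC into a private network namespace for the target and leaves its sibling in the root namespace for the initiator. A consolidated sketch of that wiring, with every command, interface name, and address taken from the trace above (all of them specific to this rig):

    # Target side (cvl_0_0, 10.0.0.2) lives in its own namespace;
    # initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The target binary is then started under ip netns exec (nvmf/common.sh@480 above), so its TCP listener binds inside the namespace while the nvme-cli initiator connects from the root namespace.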
00:12:31.028 [2024-07-25 00:58:23.950183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.028 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.028 [2024-07-25 00:58:24.019585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.028 [2024-07-25 00:58:24.116737] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.028 [2024-07-25 00:58:24.116796] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.028 [2024-07-25 00:58:24.116813] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.028 [2024-07-25 00:58:24.116827] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.028 [2024-07-25 00:58:24.116839] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.028 [2024-07-25 00:58:24.116923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.028 [2024-07-25 00:58:24.116957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.028 [2024-07-25 00:58:24.117015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.028 [2024-07-25 00:58:24.117017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:31.286 "tick_rate": 2700000000, 00:12:31.286 "poll_groups": [ 00:12:31.286 { 00:12:31.286 "name": "nvmf_tgt_poll_group_000", 00:12:31.286 "admin_qpairs": 0, 00:12:31.286 "io_qpairs": 0, 00:12:31.286 "current_admin_qpairs": 0, 00:12:31.286 "current_io_qpairs": 0, 00:12:31.286 "pending_bdev_io": 0, 00:12:31.286 "completed_nvme_io": 0, 00:12:31.286 "transports": [] 00:12:31.286 }, 00:12:31.286 { 00:12:31.286 "name": "nvmf_tgt_poll_group_001", 00:12:31.286 "admin_qpairs": 0, 00:12:31.286 "io_qpairs": 0, 00:12:31.286 "current_admin_qpairs": 0, 00:12:31.286 "current_io_qpairs": 0, 00:12:31.286 "pending_bdev_io": 0, 00:12:31.286 "completed_nvme_io": 0, 00:12:31.286 "transports": [] 00:12:31.286 }, 00:12:31.286 { 00:12:31.286 "name": "nvmf_tgt_poll_group_002", 00:12:31.286 "admin_qpairs": 0, 00:12:31.286 "io_qpairs": 0, 00:12:31.286 "current_admin_qpairs": 0, 00:12:31.286 "current_io_qpairs": 0, 00:12:31.286 "pending_bdev_io": 0, 00:12:31.286 "completed_nvme_io": 0, 00:12:31.286 "transports": [] 
00:12:31.286 }, 00:12:31.286 { 00:12:31.286 "name": "nvmf_tgt_poll_group_003", 00:12:31.286 "admin_qpairs": 0, 00:12:31.286 "io_qpairs": 0, 00:12:31.286 "current_admin_qpairs": 0, 00:12:31.286 "current_io_qpairs": 0, 00:12:31.286 "pending_bdev_io": 0, 00:12:31.286 "completed_nvme_io": 0, 00:12:31.286 "transports": [] 00:12:31.286 } 00:12:31.286 ] 00:12:31.286 }' 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.286 [2024-07-25 00:58:24.345057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.286 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:31.286 "tick_rate": 2700000000, 00:12:31.286 "poll_groups": [ 00:12:31.286 { 00:12:31.286 "name": "nvmf_tgt_poll_group_000", 00:12:31.286 "admin_qpairs": 0, 00:12:31.286 "io_qpairs": 0, 00:12:31.286 "current_admin_qpairs": 0, 00:12:31.286 "current_io_qpairs": 0, 00:12:31.286 "pending_bdev_io": 0, 00:12:31.286 "completed_nvme_io": 0, 00:12:31.286 "transports": [ 00:12:31.286 { 00:12:31.286 "trtype": "TCP" 00:12:31.286 } 00:12:31.286 ] 00:12:31.286 }, 00:12:31.286 { 00:12:31.286 "name": "nvmf_tgt_poll_group_001", 00:12:31.286 "admin_qpairs": 0, 00:12:31.287 "io_qpairs": 0, 00:12:31.287 "current_admin_qpairs": 0, 00:12:31.287 "current_io_qpairs": 0, 00:12:31.287 "pending_bdev_io": 0, 00:12:31.287 "completed_nvme_io": 0, 00:12:31.287 "transports": [ 00:12:31.287 { 00:12:31.287 "trtype": "TCP" 00:12:31.287 } 00:12:31.287 ] 00:12:31.287 }, 00:12:31.287 { 00:12:31.287 "name": "nvmf_tgt_poll_group_002", 00:12:31.287 "admin_qpairs": 0, 00:12:31.287 "io_qpairs": 0, 00:12:31.287 "current_admin_qpairs": 0, 00:12:31.287 "current_io_qpairs": 0, 00:12:31.287 "pending_bdev_io": 0, 00:12:31.287 "completed_nvme_io": 0, 00:12:31.287 "transports": [ 00:12:31.287 { 00:12:31.287 "trtype": "TCP" 00:12:31.287 } 00:12:31.287 ] 00:12:31.287 }, 00:12:31.287 { 00:12:31.287 "name": "nvmf_tgt_poll_group_003", 00:12:31.287 "admin_qpairs": 0, 00:12:31.287 "io_qpairs": 0, 00:12:31.287 "current_admin_qpairs": 0, 00:12:31.287 "current_io_qpairs": 0, 00:12:31.287 "pending_bdev_io": 0, 00:12:31.287 "completed_nvme_io": 0, 00:12:31.287 "transports": [ 00:12:31.287 { 00:12:31.287 "trtype": "TCP" 00:12:31.287 } 00:12:31.287 ] 00:12:31.287 } 00:12:31.287 ] 
00:12:31.287 }' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.287 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.544 Malloc1 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.544 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.545 [2024-07-25 00:58:24.484367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:31.545 [2024-07-25 00:58:24.506822] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:31.545 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:31.545 could not add new controller: failed to write to nvme-fabrics device 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.545 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.110 00:58:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.110 00:58:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:32.110 00:58:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.110 00:58:25 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:32.110 00:58:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.654 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.655 [2024-07-25 00:58:27.329138] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:34.655 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:34.655 could not add new controller: failed to write to nvme-fabrics device 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.655 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.912 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.912 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:34.912 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.912 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:34.912 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:36.807 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:36.807 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:36.807 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.065 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:37.065 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.065 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:37.065 00:58:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.065 [2024-07-25 00:58:30.056009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.065 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.630 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.630 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:37.630 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.630 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:37.630 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.155 [2024-07-25 00:58:32.908832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.155 
00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.155 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.156 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.413 00:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.413 00:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:40.413 00:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.413 00:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:40.413 00:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.936 00:58:35 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.936 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.937 [2024-07-25 00:58:35.677965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.937 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.502 00:58:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.502 00:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:43.502 00:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.502 00:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:43.502 00:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.400 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.401 [2024-07-25 00:58:38.530353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.401 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.333 00:58:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.333 00:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:46.334 00:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
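Each loop iteration reaches this point by connecting with nvme-cli and then polling for the block device. The connect invocation is wrapped across several trace lines, so here it is reassembled in one piece; all values come from this run, and note that the host UUID does double duty as both the hostnqn suffix and the hostid:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    # teardown, as traced after each waitforserial success:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1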
00:12:46.334 00:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:46.334 00:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 [2024-07-25 00:58:41.318597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.231 00:58:41 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.231 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.164 00:58:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.164 00:58:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:49.164 00:58:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.164 00:58:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:49.164 00:58:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
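That closes the five create/connect/disconnect/delete iterations. The (( i++ <= 15 )), lsblk, and sleep 2 lines repeated through each of them come from the waitforserial helper in common/autotest_common.sh; below is a sketch of its polling logic reconstructed from the trace. The real helper also accepts an expected device count as a second argument, which this run leaves empty (the [[ -n '' ]] checks above), and the exact placement of the sleep relative to the check is an assumption:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2                                   # settle time, per @1201 in the trace
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }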
00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.061 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 [2024-07-25 00:58:44.225993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 [2024-07-25 00:58:44.274050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 [2024-07-25 00:58:44.322196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 [2024-07-25 00:58:44.370407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 [2024-07-25 00:58:44.418575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.320 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:51.578 "tick_rate": 2700000000, 00:12:51.578 "poll_groups": [ 00:12:51.578 { 00:12:51.578 "name": "nvmf_tgt_poll_group_000", 00:12:51.578 "admin_qpairs": 2, 00:12:51.578 
"io_qpairs": 84, 00:12:51.578 "current_admin_qpairs": 0, 00:12:51.578 "current_io_qpairs": 0, 00:12:51.578 "pending_bdev_io": 0, 00:12:51.578 "completed_nvme_io": 233, 00:12:51.578 "transports": [ 00:12:51.578 { 00:12:51.578 "trtype": "TCP" 00:12:51.578 } 00:12:51.578 ] 00:12:51.578 }, 00:12:51.578 { 00:12:51.578 "name": "nvmf_tgt_poll_group_001", 00:12:51.578 "admin_qpairs": 2, 00:12:51.578 "io_qpairs": 84, 00:12:51.578 "current_admin_qpairs": 0, 00:12:51.578 "current_io_qpairs": 0, 00:12:51.578 "pending_bdev_io": 0, 00:12:51.578 "completed_nvme_io": 134, 00:12:51.578 "transports": [ 00:12:51.578 { 00:12:51.578 "trtype": "TCP" 00:12:51.578 } 00:12:51.578 ] 00:12:51.578 }, 00:12:51.578 { 00:12:51.578 "name": "nvmf_tgt_poll_group_002", 00:12:51.578 "admin_qpairs": 1, 00:12:51.578 "io_qpairs": 84, 00:12:51.578 "current_admin_qpairs": 0, 00:12:51.578 "current_io_qpairs": 0, 00:12:51.578 "pending_bdev_io": 0, 00:12:51.578 "completed_nvme_io": 186, 00:12:51.578 "transports": [ 00:12:51.578 { 00:12:51.578 "trtype": "TCP" 00:12:51.578 } 00:12:51.578 ] 00:12:51.578 }, 00:12:51.578 { 00:12:51.578 "name": "nvmf_tgt_poll_group_003", 00:12:51.578 "admin_qpairs": 2, 00:12:51.578 "io_qpairs": 84, 00:12:51.578 "current_admin_qpairs": 0, 00:12:51.578 "current_io_qpairs": 0, 00:12:51.578 "pending_bdev_io": 0, 00:12:51.578 "completed_nvme_io": 133, 00:12:51.578 "transports": [ 00:12:51.578 { 00:12:51.578 "trtype": "TCP" 00:12:51.578 } 00:12:51.578 ] 00:12:51.578 } 00:12:51.578 ] 00:12:51.578 }' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:51.578 rmmod nvme_tcp 00:12:51.578 rmmod nvme_fabrics 00:12:51.578 rmmod nvme_keyring 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:51.578 00:58:44 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3701025 ']' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3701025 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3701025 ']' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3701025 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3701025 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3701025' 00:12:51.578 killing process with pid 3701025 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3701025 00:12:51.578 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3701025 00:12:51.836 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.836 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.836 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.836 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.836 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.836 00:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.836 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.836 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.370 00:58:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.370 00:12:54.370 real 0m25.213s 00:12:54.370 user 1m22.279s 00:12:54.370 sys 0m3.919s 00:12:54.370 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:54.371 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.371 ************************************ 00:12:54.371 END TEST nvmf_rpc 00:12:54.371 ************************************ 00:12:54.371 00:58:46 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.371 00:58:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:54.371 00:58:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:54.371 00:58:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.371 ************************************ 00:12:54.371 START TEST nvmf_invalid 00:12:54.371 ************************************ 00:12:54.371 00:58:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.371 * Looking for test storage... 
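
The nvmf_rpc run that ends above exercises the full subsystem lifecycle over JSON-RPC and then cross-checks the target's counters. Below is a minimal bash sketch reconstructed from the rpc.sh xtrace records; rpc_cmd is assumed to forward to scripts/rpc.py against the running target, and the loop count is set earlier in rpc.sh and is not visible in this excerpt.

    # Lifecycle loop, per the target/rpc.sh@99-@107 records in the trace.
    loops=5   # assumption: the real value is assigned before this excerpt
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

    # Counter check, per rpc.sh@110-@113: jsum sums one numeric field across
    # all poll groups of the nvmf_get_stats JSON captured into $stats. Feeding
    # jq from $stats is an assumption; that plumbing is not shown in the trace.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # trace evaluates (( 7 > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # trace evaluates (( 336 > 0 ))

The sums are self-consistent with the stats dump above: four poll groups at 84 io_qpairs each give 336, and admin_qpairs of 2+2+1+2 give 7.
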
00:12:54.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.371 00:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:56.297 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.297 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:56.297 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:56.297 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:56.298 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:56.298 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:56.298 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:56.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:56.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:56.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:12:56.298 00:12:56.298 --- 10.0.0.2 ping statistics --- 00:12:56.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.298 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:12:56.298 00:12:56.298 --- 10.0.0.1 ping statistics --- 00:12:56.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.298 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3705527 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3705527 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3705527 ']' 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.298 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 [2024-07-25 00:58:49.277479] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
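
The PING output above is the tail end of nvmf_tcp_init, which splits the two-port NIC between target and initiator using a network namespace. The commands below are condensed from the nvmf/common.sh records in the trace; the interface and namespace names are the ones the trace uses, and everything runs as root.

    # Target port moves into its own namespace; the peer stays in the root
    # namespace for the initiator (per nvmf/common.sh@244-@268 above).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                 # the trace pings both directions

This is also why nvmf_tgt is launched through ip netns exec cvl_0_0_ns_spdk in the records above, and why every listener in these tests binds 10.0.0.2:4420.
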
00:12:56.299 [2024-07-25 00:58:49.277568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.299 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.299 [2024-07-25 00:58:49.347054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.299 [2024-07-25 00:58:49.439498] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.299 [2024-07-25 00:58:49.439557] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.299 [2024-07-25 00:58:49.439573] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.299 [2024-07-25 00:58:49.439585] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.299 [2024-07-25 00:58:49.439598] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.299 [2024-07-25 00:58:49.439694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.299 [2024-07-25 00:58:49.439728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.299 [2024-07-25 00:58:49.439848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.299 [2024-07-25 00:58:49.439850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.557 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:56.557 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:56.557 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:56.557 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.557 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:56.557 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.557 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:56.557 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21718 00:12:56.814 [2024-07-25 00:58:49.863905] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:56.814 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:56.814 { 00:12:56.814 "nqn": "nqn.2016-06.io.spdk:cnode21718", 00:12:56.814 "tgt_name": "foobar", 00:12:56.814 "method": "nvmf_create_subsystem", 00:12:56.814 "req_id": 1 00:12:56.814 } 00:12:56.814 Got JSON-RPC error response 00:12:56.814 response: 00:12:56.814 { 00:12:56.814 "code": -32603, 00:12:56.814 "message": "Unable to find target foobar" 00:12:56.814 }' 00:12:56.814 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:56.814 { 00:12:56.814 "nqn": "nqn.2016-06.io.spdk:cnode21718", 00:12:56.814 "tgt_name": "foobar", 00:12:56.814 "method": "nvmf_create_subsystem", 00:12:56.814 "req_id": 1 00:12:56.814 } 00:12:56.814 Got JSON-RPC error response 00:12:56.814 response: 00:12:56.814 { 00:12:56.814 "code": -32603, 00:12:56.814 "message": "Unable to find target foobar" 00:12:56.814 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:56.814 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:56.814 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2467 00:12:57.072 [2024-07-25 00:58:50.161002] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2467: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:57.072 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:57.072 { 00:12:57.072 "nqn": "nqn.2016-06.io.spdk:cnode2467", 00:12:57.072 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:57.072 "method": "nvmf_create_subsystem", 00:12:57.072 "req_id": 1 00:12:57.072 } 00:12:57.072 Got JSON-RPC error response 00:12:57.072 response: 00:12:57.072 { 00:12:57.072 "code": -32602, 00:12:57.072 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:57.072 }' 00:12:57.072 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:57.072 { 00:12:57.072 "nqn": "nqn.2016-06.io.spdk:cnode2467", 00:12:57.072 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:57.072 "method": "nvmf_create_subsystem", 00:12:57.072 "req_id": 1 00:12:57.072 } 00:12:57.072 Got JSON-RPC error response 00:12:57.072 response: 00:12:57.072 { 00:12:57.072 "code": -32602, 00:12:57.072 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:57.072 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:57.072 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:57.072 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6290 00:12:57.330 [2024-07-25 00:58:50.433871] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6290: invalid model number 'SPDK_Controller' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:57.330 { 00:12:57.330 "nqn": "nqn.2016-06.io.spdk:cnode6290", 00:12:57.330 "model_number": "SPDK_Controller\u001f", 00:12:57.330 "method": "nvmf_create_subsystem", 00:12:57.330 "req_id": 1 00:12:57.330 } 00:12:57.330 Got JSON-RPC error response 00:12:57.330 response: 00:12:57.330 { 00:12:57.330 "code": -32602, 00:12:57.330 "message": "Invalid MN SPDK_Controller\u001f" 00:12:57.330 }' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:57.330 { 00:12:57.330 "nqn": "nqn.2016-06.io.spdk:cnode6290", 00:12:57.330 "model_number": "SPDK_Controller\u001f", 00:12:57.330 "method": "nvmf_create_subsystem", 00:12:57.330 "req_id": 1 00:12:57.330 } 00:12:57.330 Got JSON-RPC error response 00:12:57.330 response: 00:12:57.330 { 00:12:57.330 "code": -32602, 00:12:57.330 "message": "Invalid MN SPDK_Controller\u001f" 00:12:57.330 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.330 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 44 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:57.588 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:12:57.589 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '%Km@d*C!\wdjO\],}VT>V?!*'\''=q=:Ft!/,0Y55AA`4Ku' 00:12:57.849 00:58:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0.05/,Lf|"}Umk>T>V?!*'\''=q=:Ft!/,0Y55AA`4Ku' nqn.2016-06.io.spdk:cnode32559 00:12:58.107 [2024-07-25 00:58:51.136132] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32559: invalid model number '0.05/,Lf|"}Umk>T>V?!*'=q=:Ft!/,0Y55AA`4Ku' 00:12:58.107 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:58.107 { 00:12:58.107 "nqn": "nqn.2016-06.io.spdk:cnode32559", 00:12:58.107 "model_number": "0.05/,Lf|\"}Umk>T>V?!*'\''=q=:Ft!/,0Y55AA`4Ku", 00:12:58.107 "method": "nvmf_create_subsystem", 00:12:58.107 "req_id": 1 00:12:58.107 } 00:12:58.107 Got 
JSON-RPC error response 00:12:58.107 response: 00:12:58.107 { 00:12:58.107 "code": -32602, 00:12:58.107 "message": "Invalid MN 0.05/,Lf|\"}Umk>T>V?!*'\''=q=:Ft!/,0Y55AA`4Ku" 00:12:58.107 }' 00:12:58.107 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:58.107 { 00:12:58.107 "nqn": "nqn.2016-06.io.spdk:cnode32559", 00:12:58.107 "model_number": "0.05/,Lf|\"}Umk>T>V?!*'=q=:Ft!/,0Y55AA`4Ku", 00:12:58.107 "method": "nvmf_create_subsystem", 00:12:58.107 "req_id": 1 00:12:58.107 } 00:12:58.107 Got JSON-RPC error response 00:12:58.107 response: 00:12:58.107 { 00:12:58.107 "code": -32602, 00:12:58.107 "message": "Invalid MN 0.05/,Lf|\"}Umk>T>V?!*'=q=:Ft!/,0Y55AA`4Ku" 00:12:58.107 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:58.107 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:58.364 [2024-07-25 00:58:51.421129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.364 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:58.621 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:58.621 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:58.621 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:58.621 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:58.621 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:58.879 [2024-07-25 00:58:51.918772] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:58.879 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:58.879 { 00:12:58.879 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:58.879 "listen_address": { 00:12:58.879 "trtype": "tcp", 00:12:58.879 "traddr": "", 00:12:58.879 "trsvcid": "4421" 00:12:58.879 }, 00:12:58.879 "method": "nvmf_subsystem_remove_listener", 00:12:58.879 "req_id": 1 00:12:58.879 } 00:12:58.879 Got JSON-RPC error response 00:12:58.879 response: 00:12:58.879 { 00:12:58.879 "code": -32602, 00:12:58.879 "message": "Invalid parameters" 00:12:58.879 }' 00:12:58.879 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:58.879 { 00:12:58.879 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:58.879 "listen_address": { 00:12:58.879 "trtype": "tcp", 00:12:58.879 "traddr": "", 00:12:58.879 "trsvcid": "4421" 00:12:58.879 }, 00:12:58.879 "method": "nvmf_subsystem_remove_listener", 00:12:58.879 "req_id": 1 00:12:58.879 } 00:12:58.879 Got JSON-RPC error response 00:12:58.879 response: 00:12:58.879 { 00:12:58.879 "code": -32602, 00:12:58.879 "message": "Invalid parameters" 00:12:58.879 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:58.879 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19884 -i 0 00:12:59.136 [2024-07-25 00:58:52.171569] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19884: invalid cntlid range [0-65519] 00:12:59.136 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:59.136 { 00:12:59.136 "nqn": 
"nqn.2016-06.io.spdk:cnode19884", 00:12:59.136 "min_cntlid": 0, 00:12:59.136 "method": "nvmf_create_subsystem", 00:12:59.136 "req_id": 1 00:12:59.136 } 00:12:59.136 Got JSON-RPC error response 00:12:59.136 response: 00:12:59.136 { 00:12:59.136 "code": -32602, 00:12:59.136 "message": "Invalid cntlid range [0-65519]" 00:12:59.136 }' 00:12:59.136 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:59.136 { 00:12:59.136 "nqn": "nqn.2016-06.io.spdk:cnode19884", 00:12:59.136 "min_cntlid": 0, 00:12:59.136 "method": "nvmf_create_subsystem", 00:12:59.136 "req_id": 1 00:12:59.136 } 00:12:59.136 Got JSON-RPC error response 00:12:59.136 response: 00:12:59.136 { 00:12:59.136 "code": -32602, 00:12:59.136 "message": "Invalid cntlid range [0-65519]" 00:12:59.136 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.136 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2927 -i 65520 00:12:59.394 [2024-07-25 00:58:52.412371] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2927: invalid cntlid range [65520-65519] 00:12:59.394 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:59.394 { 00:12:59.394 "nqn": "nqn.2016-06.io.spdk:cnode2927", 00:12:59.394 "min_cntlid": 65520, 00:12:59.394 "method": "nvmf_create_subsystem", 00:12:59.394 "req_id": 1 00:12:59.394 } 00:12:59.394 Got JSON-RPC error response 00:12:59.394 response: 00:12:59.394 { 00:12:59.394 "code": -32602, 00:12:59.394 "message": "Invalid cntlid range [65520-65519]" 00:12:59.394 }' 00:12:59.394 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:59.394 { 00:12:59.394 "nqn": "nqn.2016-06.io.spdk:cnode2927", 00:12:59.394 "min_cntlid": 65520, 00:12:59.394 "method": "nvmf_create_subsystem", 00:12:59.394 "req_id": 1 00:12:59.394 } 00:12:59.394 Got JSON-RPC error response 00:12:59.394 response: 00:12:59.394 { 00:12:59.394 "code": -32602, 00:12:59.394 "message": "Invalid cntlid range [65520-65519]" 00:12:59.394 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.394 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13701 -I 0 00:12:59.651 [2024-07-25 00:58:52.665273] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13701: invalid cntlid range [1-0] 00:12:59.651 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:59.651 { 00:12:59.651 "nqn": "nqn.2016-06.io.spdk:cnode13701", 00:12:59.651 "max_cntlid": 0, 00:12:59.651 "method": "nvmf_create_subsystem", 00:12:59.651 "req_id": 1 00:12:59.651 } 00:12:59.651 Got JSON-RPC error response 00:12:59.651 response: 00:12:59.651 { 00:12:59.651 "code": -32602, 00:12:59.651 "message": "Invalid cntlid range [1-0]" 00:12:59.651 }' 00:12:59.651 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:59.651 { 00:12:59.651 "nqn": "nqn.2016-06.io.spdk:cnode13701", 00:12:59.651 "max_cntlid": 0, 00:12:59.651 "method": "nvmf_create_subsystem", 00:12:59.651 "req_id": 1 00:12:59.651 } 00:12:59.651 Got JSON-RPC error response 00:12:59.651 response: 00:12:59.651 { 00:12:59.651 "code": -32602, 00:12:59.651 "message": "Invalid cntlid range [1-0]" 00:12:59.651 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.651 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18903 -I 65520 00:12:59.909 [2024-07-25 00:58:52.910041] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18903: invalid cntlid range [1-65520] 00:12:59.909 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:59.909 { 00:12:59.909 "nqn": "nqn.2016-06.io.spdk:cnode18903", 00:12:59.909 "max_cntlid": 65520, 00:12:59.909 "method": "nvmf_create_subsystem", 00:12:59.909 "req_id": 1 00:12:59.909 } 00:12:59.909 Got JSON-RPC error response 00:12:59.909 response: 00:12:59.909 { 00:12:59.909 "code": -32602, 00:12:59.909 "message": "Invalid cntlid range [1-65520]" 00:12:59.909 }' 00:12:59.909 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:59.909 { 00:12:59.909 "nqn": "nqn.2016-06.io.spdk:cnode18903", 00:12:59.909 "max_cntlid": 65520, 00:12:59.909 "method": "nvmf_create_subsystem", 00:12:59.909 "req_id": 1 00:12:59.909 } 00:12:59.909 Got JSON-RPC error response 00:12:59.909 response: 00:12:59.909 { 00:12:59.909 "code": -32602, 00:12:59.909 "message": "Invalid cntlid range [1-65520]" 00:12:59.909 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.909 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26417 -i 6 -I 5 00:13:00.166 [2024-07-25 00:58:53.170935] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26417: invalid cntlid range [6-5] 00:13:00.166 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:00.166 { 00:13:00.167 "nqn": "nqn.2016-06.io.spdk:cnode26417", 00:13:00.167 "min_cntlid": 6, 00:13:00.167 "max_cntlid": 5, 00:13:00.167 "method": "nvmf_create_subsystem", 00:13:00.167 "req_id": 1 00:13:00.167 } 00:13:00.167 Got JSON-RPC error response 00:13:00.167 response: 00:13:00.167 { 00:13:00.167 "code": -32602, 00:13:00.167 "message": "Invalid cntlid range [6-5]" 00:13:00.167 }' 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:00.167 { 00:13:00.167 "nqn": "nqn.2016-06.io.spdk:cnode26417", 00:13:00.167 "min_cntlid": 6, 00:13:00.167 "max_cntlid": 5, 00:13:00.167 "method": "nvmf_create_subsystem", 00:13:00.167 "req_id": 1 00:13:00.167 } 00:13:00.167 Got JSON-RPC error response 00:13:00.167 response: 00:13:00.167 { 00:13:00.167 "code": -32602, 00:13:00.167 "message": "Invalid cntlid range [6-5]" 00:13:00.167 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:00.167 { 00:13:00.167 "name": "foobar", 00:13:00.167 "method": "nvmf_delete_target", 00:13:00.167 "req_id": 1 00:13:00.167 } 00:13:00.167 Got JSON-RPC error response 00:13:00.167 response: 00:13:00.167 { 00:13:00.167 "code": -32602, 00:13:00.167 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:00.167 }' 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:00.167 { 00:13:00.167 "name": "foobar", 00:13:00.167 "method": "nvmf_delete_target", 00:13:00.167 "req_id": 1 00:13:00.167 } 00:13:00.167 Got JSON-RPC error response 00:13:00.167 response: 00:13:00.167 { 00:13:00.167 "code": -32602, 00:13:00.167 "message": "The specified target doesn't exist, cannot delete it." 00:13:00.167 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:00.167 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:00.167 rmmod nvme_tcp 00:13:00.424 rmmod nvme_fabrics 00:13:00.424 rmmod nvme_keyring 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3705527 ']' 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3705527 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3705527 ']' 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3705527 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3705527 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3705527' 00:13:00.424 killing process with pid 3705527 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3705527 00:13:00.424 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3705527 00:13:00.682 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.682 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.682 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.682 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.682 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.682 00:58:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.682 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
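For reference, every invalid-cntlid case above drives the same nvmf_create_subsystem RPC with an out-of-range controller ID and asserts on the -32602 "Invalid cntlid range" reply. A minimal sketch of the same five probes run by hand from an SPDK tree against a live target, assuming the default /var/tmp/spdk.sock RPC socket; the cnode1 NQN is a placeholder for the randomized cnodeNNNN names the test generates:

  # The target accepts cntlid values from 1 through 65519 (0xFFEF) only;
  # each call below should fail with JSON-RPC error -32602.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0       # min_cntlid below range
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 65520   # min_cntlid above range
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -I 0       # max_cntlid below range
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -I 65520   # max_cntlid above range
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 6 -I 5  # min greater than max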
00:13:00.682 00:58:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.581 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:02.581 00:13:02.581 real 0m8.696s 00:13:02.581 user 0m20.346s 00:13:02.581 sys 0m2.435s 00:13:02.581 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.581 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.581 ************************************ 00:13:02.581 END TEST nvmf_invalid 00:13:02.581 ************************************ 00:13:02.581 00:58:55 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:02.581 00:58:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:02.581 00:58:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.581 00:58:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.581 ************************************ 00:13:02.581 START TEST nvmf_abort 00:13:02.581 ************************************ 00:13:02.581 00:58:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:02.840 * Looking for test storage... 00:13:02.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.840 00:58:55 
nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 
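For context, abort.sh builds its test namespace from a 64 MB malloc bdev (4096-byte blocks, per the two sizes set here and just below) wrapped in a delay bdev, then exports it over NVMe/TCP. A hand-run sketch of the same subsystem setup that the rpc_cmd calls further down perform, with the same RPC-socket assumption as the earlier sketch:

  # Backing stack: 64 MB malloc bdev with 4 KiB blocks, behind a delay bdev.
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Export it on the target-side address used throughout this run.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420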
00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:02.840 00:58:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:04.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:04.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:04.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:04.738 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.738 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:04.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:13:04.739 00:13:04.739 --- 10.0.0.2 ping statistics --- 00:13:04.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.739 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:04.739 00:13:04.739 --- 10.0.0.1 ping statistics --- 00:13:04.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.739 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3708160 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3708160 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3708160 ']' 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:04.739 00:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:04.739 [2024-07-25 00:58:57.878777] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:13:04.739 [2024-07-25 00:58:57.878864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.997 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.997 [2024-07-25 00:58:57.949053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.997 [2024-07-25 00:58:58.042025] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.997 [2024-07-25 00:58:58.042088] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.997 [2024-07-25 00:58:58.042105] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.997 [2024-07-25 00:58:58.042126] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.997 [2024-07-25 00:58:58.042138] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.997 [2024-07-25 00:58:58.042226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.997 [2024-07-25 00:58:58.042283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.997 [2024-07-25 00:58:58.042287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.254 [2024-07-25 00:58:58.196711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.254 Malloc0 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.254 Delay0 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:05.254 00:58:58 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.254 [2024-07-25 00:58:58.262073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.254 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:05.254 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.254 [2024-07-25 00:58:58.368338] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:07.779 Initializing NVMe Controllers 00:13:07.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:07.779 controller IO queue size 128 less than required 00:13:07.779 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:07.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:07.780 Initialization complete. Launching workers. 
00:13:07.780 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32988 00:13:07.780 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33049, failed to submit 62 00:13:07.780 success 32992, unsuccess 57, failed 0 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.780 rmmod nvme_tcp 00:13:07.780 rmmod nvme_fabrics 00:13:07.780 rmmod nvme_keyring 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3708160 ']' 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3708160 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3708160 ']' 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3708160 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3708160 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3708160' 00:13:07.780 killing process with pid 3708160 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3708160 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3708160 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.780 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.681 00:59:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:09.681 00:13:09.681 real 0m7.105s 00:13:09.681 user 0m10.310s 00:13:09.681 sys 0m2.475s 00:13:09.681 00:59:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:09.681 00:59:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.681 ************************************ 00:13:09.681 END TEST nvmf_abort 00:13:09.681 ************************************ 00:13:09.939 00:59:02 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:09.939 00:59:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:09.939 00:59:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:09.939 00:59:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:09.939 ************************************ 00:13:09.939 START TEST nvmf_ns_hotplug_stress 00:13:09.939 ************************************ 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:09.939 * Looking for test storage... 00:13:09.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.939 00:59:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.939 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.940 00:59:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.940 00:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.840 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:11.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:11.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.841 00:59:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:11.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:11.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
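As in the nvmf_abort run above, the common.sh helpers below move the target-side port (cvl_0_0) into a private network namespace so initiator traffic to 10.0.0.2 actually crosses the physical link, then ping in both directions. Once they finish, the topology can be sanity-checked by hand, assuming the interface and namespace names chosen by this run:

  # Expect 10.0.0.2/24 on the target port inside the namespace ...
  ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0
  # ... and 10.0.0.1/24 on the initiator port in the root namespace.
  ip -4 addr show cvl_0_1
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator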
00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.841 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.100 00:59:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:13:12.100 00:13:12.100 --- 10.0.0.2 ping statistics --- 00:13:12.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.100 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:13:12.100 00:13:12.100 --- 10.0.0.1 ping statistics --- 00:13:12.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.100 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3710374 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3710374 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3710374 ']' 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:12.100 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 [2024-07-25 00:59:05.131864] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:13:12.100 [2024-07-25 00:59:05.131941] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.100 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.100 [2024-07-25 00:59:05.204161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:12.359 [2024-07-25 00:59:05.300961] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:12.359 [2024-07-25 00:59:05.301027] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.359 [2024-07-25 00:59:05.301044] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.359 [2024-07-25 00:59:05.301057] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.359 [2024-07-25 00:59:05.301069] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.359 [2024-07-25 00:59:05.301148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.359 [2024-07-25 00:59:05.301218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.359 [2024-07-25 00:59:05.301224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.359 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:12.359 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:12.359 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.359 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.359 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.359 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.359 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:12.359 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:12.653 [2024-07-25 00:59:05.724947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.653 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:12.910 00:59:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.168 [2024-07-25 00:59:06.287827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.168 00:59:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.425 00:59:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:13.989 Malloc0 00:13:13.989 00:59:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:14.246 Delay0 00:13:14.246 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.520 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:14.520 NULL1 00:13:14.777 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:15.034 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3710797 00:13:15.034 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:15.034 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:15.034 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.034 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.034 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.598 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:15.598 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:15.598 true 00:13:15.598 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:15.598 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.855 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.112 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:16.112 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:16.369 true 00:13:16.369 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:16.369 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.626 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.883 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:16.883 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:17.141 true 00:13:17.141 00:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:17.141 00:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.511 Read completed with error (sct=0, sc=11) 00:13:18.511 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.511 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:18.511 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:18.768 true 00:13:18.768 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:18.768 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.025 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.282 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:19.282 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:19.539 true 00:13:19.539 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:19.539 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.797 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.054 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:20.054 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:20.322 true 00:13:20.322 00:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:20.322 00:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.253 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.509 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:21.509 
00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:21.766 true 00:13:21.766 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:21.766 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.697 00:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.697 00:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:22.697 00:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:22.955 true 00:13:22.955 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:22.955 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.212 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.469 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:23.470 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:23.727 true 00:13:23.727 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:23.727 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.660 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.918 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:24.918 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:25.175 true 00:13:25.175 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:25.175 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.432 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.690 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:25.690 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:25.947 true 00:13:25.947 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:25.947 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.205 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.463 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:26.463 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:26.721 true 00:13:26.721 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:26.721 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.656 00:59:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.962 00:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:27.962 00:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:28.220 true 00:13:28.220 00:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:28.220 00:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.152 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.408 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:29.408 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:29.665 true 00:13:29.665 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:29.665 00:59:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.922 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.179 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:30.180 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:30.180 true 00:13:30.180 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:30.180 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.129 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.386 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:31.386 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:31.643 true 00:13:31.643 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:31.643 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.900 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.157 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:32.157 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:32.414 true 00:13:32.414 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:32.414 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.346 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.603 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:33.603 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:33.603 true 00:13:33.603 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:33.603 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.860 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.118 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:34.118 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:34.376 true 00:13:34.376 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:34.376 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.633 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.891 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:34.891 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:35.149 true 00:13:35.149 00:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:35.149 00:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.521 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.521 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:36.521 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:36.778 true 00:13:36.778 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:36.778 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.036 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.294 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:37.294 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:37.551 true 00:13:37.551 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:37.551 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.483 00:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.741 00:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:38.741 00:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:38.998 true 00:13:38.998 00:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:38.998 00:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.256 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.513 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:39.513 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:39.771 true 00:13:39.771 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:39.771 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.703 00:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.959 00:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:40.959 00:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:41.216 true 00:13:41.216 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 00:13:41.216 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.473 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.730 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:41.730 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:41.987 true 00:13:41.987 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797 
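Pulling the scattered trace back together: markers @27-@42 (above) build the target once, then markers @44-@50 run the hotplug loop until the perf job exits. The one-time setup, condensed from the trace (rpc.py shortened from its full workspace path):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

Every iteration traced above is then the same five-step body: while the 30-second perf job is still alive, namespace 1 is torn out of cnode1, Delay0 is re-attached, and NULL1 grows by one 512-byte block, so the initiator absorbs a continuous stream of namespace remove/add/resize events under 512-byte random-read load. Reconstructed as a sketch (the script's exact surrounding control flow may differ slightly):

  null_size=1000                      # marker @25
  while kill -0 "$PERF_PID" 2>/dev/null; do
      ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"   # prints the 'true' lines
  done

The loop ends exactly at the "kill: (3710797) - No such process" message that follows the perf latency summary below.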
00:13:41.987 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:42.965 00:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:42.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.965 00:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:13:42.965 00:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:13:43.222 true
00:13:43.222 00:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797
00:13:43.222 00:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:43.480 00:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:43.737 00:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:13:43.737 00:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:13:43.995 true
00:13:43.995 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797
00:13:43.995 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:44.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:44.927 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:45.185 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:13:45.185 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:13:45.185 Initializing NVMe Controllers
00:13:45.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:45.185 Controller IO queue size 128, less than required.
00:13:45.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:45.185 Controller IO queue size 128, less than required.
00:13:45.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:45.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:45.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:45.185 Initialization complete. Launching workers.
00:13:45.185 ========================================================
00:13:45.185 Latency(us)
00:13:45.185 Device Information : IOPS MiB/s Average min max
00:13:45.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 925.16 0.45 68217.59 2382.24 1014472.62
00:13:45.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10480.77 5.12 12176.25 2895.99 451928.12
00:13:45.185 ========================================================
00:13:45.185 Total : 11405.93 5.57 16721.90 2382.24 1014472.62
00:13:45.185
00:13:45.443 true
00:13:45.443 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3710797
00:13:45.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3710797) - No such process
00:13:45.443 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3710797
00:13:45.443 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:45.700 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:45.958 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:45.958 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:45.958 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:45.958 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:45.958 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:46.216 null0
00:13:46.216 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:46.216 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:46.216 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:46.474 null1
00:13:46.474 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:46.474 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:46.474 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:13:46.731 null2
00:13:46.731 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:46.731 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:46.731 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:13:46.988 null3
00:13:46.988 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:46.988 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:46.988 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:46.988 null4 00:13:47.246 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:47.246 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.246 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:47.246 null5 00:13:47.503 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:47.504 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.504 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:47.504 null6 00:13:47.504 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:47.504 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.504 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:47.761 null7 00:13:47.761 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:47.761 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.761 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:47.761 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.761 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.761 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:47.761 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.761 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
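The eight null bdevs just created (null0 through null7, markers @58-@60) come from a plain indexed loop; note the 4096-byte block size here, versus the 512-byte blocks of the NULL1 bdev used in the resize phase. A sketch of that setup loop (bdev_null_create takes the size in MB):

  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      ./scripts/rpc.py bdev_null_create "null$i" 100 4096   # 100 MB, 4 KiB blocks
  done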
00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
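Each of the background workers being spawned here runs the same add_remove helper (markers @14-@18, invoked at @63); the last worker, add_remove 8 null7, and the @66 wait on all eight PIDs follow just below. Every worker performs ten add/remove cycles of one fixed namespace ID against cnode1, each backed by its own null bdev, so the subsystem absorbs eight concurrent namespace attach/detach streams. Reconstructed from the trace as a sketch:

  add_remove() {
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; i++)); do
          ./scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  pids=()
  for ((i = 0; i < 8; i++)); do
      add_remove $((i + 1)) "null$i" &   # nsid i+1 paired with bdev null$i
      pids+=($!)
  done
  wait "${pids[@]}"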
00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3714715 3714716 3714718 3714720 3714722 3714724 3714726 3714728 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.762 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.020 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.020 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.278 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.278 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.278 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:48.278 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.278 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.278 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.536 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:48.794 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.794 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.794 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.794 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.794 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.794 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.794 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.794 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.052 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.052 00:59:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.052 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.052 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.052 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.310 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.310 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.310 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.310 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:49.310 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.310 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.310 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:49.310 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.568 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.569 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.569 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.569 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.826 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.826 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.826 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:49.826 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.826 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.826 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.826 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.826 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.084 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.084 
00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.342 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.342 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.342 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.342 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.342 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.342 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.342 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.342 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.599 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.856 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.856 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.856 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.856 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.856 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.856 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.856 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.856 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.113 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.370 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.371 
00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.371 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.371 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.371 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.371 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.371 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.371 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.629 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.887 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.887 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.887 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.887 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.887 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.887 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.887 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.887 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.145 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.403 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.403 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.403 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.403 
00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.403 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.403 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.403 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.403 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.661 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.919 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.919 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.919 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.177 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:53.177 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:53.177 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.177 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:53.177 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
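For orientation, the ten rounds of namespace churn traced above all come from a small loop in ns_hotplug_stress.sh (lines @16-@18 in the xtrace). The following is a minimal sketch reconstructed from the trace alone, not the verbatim script: the rpc shorthand and the shuf-based ordering are assumptions (the nsids arrive in a different shuffled order each round, and the interleaved ++i / i<10 pairs suggest the add and remove batches actually run as background subshells; the sketch keeps them sequential for readability).

# Hypothetical reconstruction of the hotplug loop exercised above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do                 # @16: ten hotplug rounds
    for n in $(shuf -e 1 2 3 4 5 6 7 8); do      # @17: attach null0..null7 as nsid 1..8
        $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in $(shuf -e 1 2 3 4 5 6 7 8); do      # @18: detach the same nsids again
        $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done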
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:53.435 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3710374 ']'
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3710374
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3710374 ']'
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3710374
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3710374
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3710374'
00:13:53.435 killing process with pid 3710374
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3710374
00:13:53.435 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3710374
00:13:53.693 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:53.693 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:53.693 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:53.693 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:53.693 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:53.693 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:53.693 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:53.693 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:56.222 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:56.222
00:13:56.222 real 0m45.899s
00:13:56.222 user 3m28.857s
00:13:56.222 sys 0m16.451s
00:13:56.222 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:56.222 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:13:56.222 ************************************
00:13:56.222 END TEST nvmf_ns_hotplug_stress
00:13:56.222 ************************************
00:13:56.222 00:59:48 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:56.222 00:59:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:13:56.222 00:59:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:56.222 00:59:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:56.222 ************************************
00:13:56.222 START TEST nvmf_connect_stress
00:13:56.222 ************************************
00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:56.222 * Looking for test storage...
00:13:56.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.222 00:59:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.223 00:59:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.122 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.122 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:58.122 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:58.122 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:58.122 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:58.122 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:58.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:58.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:58.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.123 00:59:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:58.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:58.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:58.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms
00:13:58.123
00:13:58.123 --- 10.0.0.2 ping statistics ---
00:13:58.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:58.123 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:58.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:58.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms
00:13:58.123
00:13:58.123 --- 10.0.0.1 ping statistics ---
00:13:58.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:58.123 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:58.123 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3717472
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3717472
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3717472 ']'
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:58.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable
00:13:58.124 00:59:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:58.124 [2024-07-25 00:59:51.039782] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
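The nvmf_tcp_init sequence traced above is what gives the test a self-contained TCP path: the target-side port is moved into a private network namespace, both ends get static 10.0.0.x addresses, port 4420 is opened, and the plumbing is verified with one ping in each direction. As a standalone sketch (interface and namespace names are the ones from this run; any other rig would substitute its own):

  # Minimal sketch of the nvmf_tcp_init plumbing shown in the trace above.
  NS=cvl_0_0_ns_spdk    # namespace that owns the target port
  TGT=cvl_0_0           # NVMF_TARGET_INTERFACE
  INI=cvl_0_1           # NVMF_INITIATOR_INTERFACE
  ip -4 addr flush "$TGT"
  ip -4 addr flush "$INI"
  ip netns add "$NS"
  ip link set "$TGT" netns "$NS"              # target port leaves the host's view
  ip addr add 10.0.0.1/24 dev "$INI"          # initiator IP stays host-side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                          # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> host

Every subsequent target command then gets wrapped in ip netns exec "$NS", which is exactly what the NVMF_TARGET_NS_CMD prefix on the nvmf_tgt launch above does.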
00:13:58.124 [2024-07-25 00:59:51.039867] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.124 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.124 [2024-07-25 00:59:51.107020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:58.124 [2024-07-25 00:59:51.196659] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.124 [2024-07-25 00:59:51.196719] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.124 [2024-07-25 00:59:51.196734] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.124 [2024-07-25 00:59:51.196748] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.124 [2024-07-25 00:59:51.196760] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.124 [2024-07-25 00:59:51.196859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.124 [2024-07-25 00:59:51.196957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.124 [2024-07-25 00:59:51.196961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.418 [2024-07-25 00:59:51.338387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.418 [2024-07-25 00:59:51.369409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.418 NULL1 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3717596 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:58.418 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.419 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.676 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.676 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:13:58.676 00:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.676 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.676 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.933 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.933 00:59:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:13:58.933 00:59:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.933 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.933 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.496 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.496 00:59:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:13:59.496 00:59:52 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.496 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.496 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.753 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.753 00:59:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:13:59.753 00:59:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.753 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.753 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.010 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.010 00:59:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:00.010 00:59:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.010 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.010 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.266 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.266 00:59:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:00.266 00:59:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.266 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.266 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.829 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.829 00:59:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:00.829 00:59:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.829 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.829 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.085 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.085 00:59:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:01.085 00:59:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.085 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.085 00:59:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.342 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.342 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:01.342 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.342 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.342 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.599 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.599 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:01.599 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:01.599 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.599 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.856 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.856 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:01.856 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.856 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.856 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.420 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.420 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:02.420 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.420 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.420 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.677 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.677 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:02.677 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.677 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.677 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.934 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.934 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:02.934 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.934 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.934 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.191 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.191 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:03.191 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.191 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.191 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.449 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.449 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:03.449 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.449 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.449 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.013 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.013 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:04.013 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.013 00:59:56 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.013 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.271 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.271 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:04.271 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.271 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.271 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.528 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.528 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:04.528 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.528 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.528 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.786 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.786 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:04.786 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.786 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.786 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.043 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.043 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:05.043 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.043 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.043 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.608 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.608 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:05.608 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.608 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.608 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.864 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.864 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:05.864 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.864 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.864 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.121 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:06.121 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.121 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:06.121 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.378 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.378 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:06.378 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.378 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.378 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.635 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.635 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:06.635 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.635 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.635 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.199 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.199 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:07.199 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.199 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.199 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.456 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.456 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:07.456 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.456 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.456 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.713 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.713 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:07.713 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.713 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.713 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.970 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.970 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:07.970 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.970 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.970 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.533 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.533 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:08.533 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.533 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.533 01:00:01 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.533 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3717596 00:14:08.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3717596) - No such process 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3717596 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:08.789 rmmod nvme_tcp 00:14:08.789 rmmod nvme_fabrics 00:14:08.789 rmmod nvme_keyring 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3717472 ']' 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3717472 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3717472 ']' 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3717472 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3717472 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3717472' 00:14:08.789 killing process with pid 3717472 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3717472 00:14:08.789 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3717472 00:14:09.045 01:00:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.045 01:00:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.045 01:00:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
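The teardown that follows kill -0 failing is the standard nvmftestfini path: settle I/O, pull the initiator-side kernel modules back out (the rmmod lines above show nvme_fabrics and nvme_keyring leaving along with nvme-tcp), then take down the target by pid after checking it is still the process that was started. Condensed, with this run's pids standing in purely for illustration:

  # Hedged condensation of the nvmfcleanup/killprocess path traced above.
  sync                                   # flush before yanking modules
  set +e                                 # removal is best-effort; the real script retries in a loop
  modprobe -v -r nvme-tcp                # also drags out nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  set -e
  if kill -0 3717472 2>/dev/null; then   # is nvmf_tgt (pid from this run) still alive?
      ps --no-headers -o comm= 3717472   # sanity-check what is about to be killed
      kill 3717472                       # SIGTERM first ...
      wait 3717472                       # ... then reap it
  fi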
00:14:09.045 01:00:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:09.045 01:00:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:09.045 01:00:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:09.045 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:09.045 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:10.941 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:10.941
00:14:10.941 real 0m15.244s
00:14:10.941 user 0m38.161s
00:14:10.941 sys 0m5.889s
00:14:10.941 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:10.941 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:10.941 ************************************
00:14:10.941 END TEST nvmf_connect_stress
00:14:10.941 ************************************
00:14:11.198 01:00:04 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:11.198 01:00:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:11.198 01:00:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:11.198 01:00:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:11.198 ************************************
00:14:11.198 START TEST nvmf_fused_ordering
00:14:11.198 ************************************
00:14:11.198 01:00:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:11.198 * Looking for test storage...
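run_test is the wrapper that produces the banner pairs and the real/user/sys block seen here: print a START banner, time the test script, print the END banner, and propagate the exit code so the harness can slice the log per test. A simplified stand-in of that shape (the real helper in autotest_common.sh also manages xtrace state and does the '[' 3 -le 1 ']' argument check visible above):

  # Simplified stand-in for run_test; not the real autotest_common.sh helper.
  run_test_sketch() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                           # e.g. fused_ordering.sh --transport=tcp
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }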
00:14:11.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.199 01:00:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:13.098 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:13.098 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:13.098 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.098 01:00:05 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:13.098 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:13.098 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.099 01:00:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:13.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms
00:14:13.099
00:14:13.099 --- 10.0.0.2 ping statistics ---
00:14:13.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:13.099 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:13.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:13.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms
00:14:13.099
00:14:13.099 --- 10.0.0.1 ping statistics ---
00:14:13.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:13.099 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3720869
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3720869
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3720869 ']'
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:13.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable
00:14:13.099 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:13.099 [2024-07-25 01:00:06.219085] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
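As in the earlier test, waitforlisten is the gate between launching nvmf_tgt and configuring it over RPC: poll until the pid is alive and the UNIX-domain socket exists, giving up after a retry budget. A rough equivalent of that loop (the real helper in autotest_common.sh does more bookkeeping; the /var/tmp/spdk.sock path and max_retries=100 mirror the trace):

  # Hedged stand-in for waitforlisten; not the real helper.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          [[ -S $rpc_addr ]] && return 0           # socket is up; rpc_cmd can proceed
          sleep 0.1
      done
      return 1                                     # never came up within the budget
  }
  # usage: waitforlisten_sketch "$nvmfpid" || exit 1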
00:14:13.099 [2024-07-25 01:00:06.219183] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.357 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.357 [2024-07-25 01:00:06.287940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.357 [2024-07-25 01:00:06.377028] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.357 [2024-07-25 01:00:06.377077] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.357 [2024-07-25 01:00:06.377106] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.357 [2024-07-25 01:00:06.377118] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.357 [2024-07-25 01:00:06.377136] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.357 [2024-07-25 01:00:06.377163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.357 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:13.357 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:13.357 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.357 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.357 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 [2024-07-25 01:00:06.523485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 [2024-07-25 01:00:06.539673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 NULL1 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.646 01:00:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:13.646 [2024-07-25 01:00:06.584265] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:13.646 [2024-07-25 01:00:06.584315] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720897 ] 00:14:13.646 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.904 Attached to nqn.2016-06.io.spdk:cnode1 00:14:13.904 Namespace ID: 1 size: 1GB 00:14:13.904 fused_ordering(0) 00:14:13.904 fused_ordering(1) 00:14:13.904 fused_ordering(2) 00:14:13.904 fused_ordering(3) 00:14:13.904 fused_ordering(4) 00:14:13.904 fused_ordering(5) 00:14:13.904 fused_ordering(6) 00:14:13.904 fused_ordering(7) 00:14:13.904 fused_ordering(8) 00:14:13.904 fused_ordering(9) 00:14:13.904 fused_ordering(10) 00:14:13.904 fused_ordering(11) 00:14:13.904 fused_ordering(12) 00:14:13.904 fused_ordering(13) 00:14:13.904 fused_ordering(14) 00:14:13.904 fused_ordering(15) 00:14:13.904 fused_ordering(16) 00:14:13.904 fused_ordering(17) 00:14:13.904 fused_ordering(18) 00:14:13.904 fused_ordering(19) 00:14:13.904 fused_ordering(20) 00:14:13.904 fused_ordering(21) 00:14:13.904 fused_ordering(22) 00:14:13.904 fused_ordering(23) 00:14:13.904 fused_ordering(24) 00:14:13.904 fused_ordering(25) 00:14:13.904 fused_ordering(26) 00:14:13.904 fused_ordering(27) 00:14:13.904 fused_ordering(28) 00:14:13.904 fused_ordering(29) 00:14:13.904 fused_ordering(30) 00:14:13.904 fused_ordering(31) 00:14:13.904 fused_ordering(32) 00:14:13.904 fused_ordering(33) 00:14:13.904 fused_ordering(34) 00:14:13.904 fused_ordering(35) 00:14:13.904 fused_ordering(36) 00:14:13.904 fused_ordering(37) 00:14:13.904 fused_ordering(38) 00:14:13.904 fused_ordering(39) 00:14:13.904 fused_ordering(40) 00:14:13.904 fused_ordering(41) 00:14:13.904 fused_ordering(42) 00:14:13.904 fused_ordering(43) 00:14:13.904 fused_ordering(44) 00:14:13.904 fused_ordering(45) 
00:14:13.904 fused_ordering(46) ... fused_ordering(1013) 00:14:16.168 [entries 46 through 1013 elided: 968 consecutive fused_ordering counter lines]
fused_ordering(1014) 00:14:16.168 fused_ordering(1015) 00:14:16.168 fused_ordering(1016) 00:14:16.168 fused_ordering(1017) 00:14:16.168 fused_ordering(1018) 00:14:16.168 fused_ordering(1019) 00:14:16.168 fused_ordering(1020) 00:14:16.168 fused_ordering(1021) 00:14:16.168 fused_ordering(1022) 00:14:16.168 fused_ordering(1023) 00:14:16.168 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:16.168 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:16.168 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:16.168 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:16.168 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:16.168 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:16.168 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:16.168 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:16.168 rmmod nvme_tcp 00:14:16.426 rmmod nvme_fabrics 00:14:16.426 rmmod nvme_keyring 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3720869 ']' 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3720869 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3720869 ']' 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3720869 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3720869 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3720869' 00:14:16.426 killing process with pid 3720869 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3720869 00:14:16.426 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3720869 00:14:16.684 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.685 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:16.685 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:16.685 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.685 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:16.685 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.685 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
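Stripped of the xtrace noise, the fused_ordering target setup above comes down to six RPCs. A sketch of the same sequence issued directly through scripts/rpc.py instead of the test's rpc_cmd wrapper (assumes the target's default /var/tmp/spdk.sock RPC socket, reachable from the root namespace since UNIX sockets live in the filesystem):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, same flags the test passed
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                           # allow any host, serial number, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                         # 1000 MiB null bdev, 512-byte blocks -> "size: 1GB" above
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # attaches as namespace ID 1

The fused_ordering binary then connects with the trid string seen in the trace ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') and drives the 1024 fused compare-and-write sequences counted above.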
00:14:16.685 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.584 01:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:18.584 00:14:18.584 real 0m7.554s 00:14:18.584 user 0m5.217s 00:14:18.584 sys 0m3.347s 00:14:18.584 01:00:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:18.584 01:00:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:18.584 ************************************ 00:14:18.584 END TEST nvmf_fused_ordering 00:14:18.584 ************************************ 00:14:18.584 01:00:11 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:18.584 01:00:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:18.584 01:00:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:18.584 01:00:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:18.584 ************************************ 00:14:18.584 START TEST nvmf_delete_subsystem 00:14:18.584 ************************************ 00:14:18.584 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:18.841 * Looking for test storage... 00:14:18.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.841 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.841 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:18.841 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.841 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.841 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.841 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.841 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.842 01:00:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.739 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:20.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:20.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.740 01:00:13 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:20.740 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:20.740 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:14:20.740 00:14:20.740 --- 10.0.0.2 ping statistics --- 00:14:20.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.740 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:20.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:14:20.740 00:14:20.740 --- 10.0.0.1 ping statistics --- 00:14:20.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.740 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.740 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.997 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:20.997 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.997 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3723615 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3723615 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3723615 ']' 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:20.998 01:00:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:20.998 [2024-07-25 01:00:13.953723] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:20.998 [2024-07-25 01:00:13.953807] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.998 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.998 [2024-07-25 01:00:14.022723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:20.998 [2024-07-25 01:00:14.112866] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:20.998 [2024-07-25 01:00:14.112927] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.998 [2024-07-25 01:00:14.112943] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.998 [2024-07-25 01:00:14.112956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.998 [2024-07-25 01:00:14.112967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.998 [2024-07-25 01:00:14.113054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.998 [2024-07-25 01:00:14.113059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.255 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.256 [2024-07-25 01:00:14.252100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.256 [2024-07-25 01:00:14.268369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.256 NULL1 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.256 Delay0 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3723751 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:21.256 01:00:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:21.256 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.256 [2024-07-25 01:00:14.343017] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
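(Condensed for readability: everything delete_subsystem.sh has staged up to this point, as a hedged sketch assembled from the RPCs traced above. rpc_cmd is the autotest harness's wrapper around scripts/rpc.py, and a nvmf_tgt launched as shown earlier is assumed; paths are shortened. The Delay0 bdev exists purely to guarantee I/O is still queued when the deletion below lands.)

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512 B blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # all four delay knobs ~1 s per I/O
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Queue up I/O that cannot drain quickly (q=128 per core against a ~1 s bdev) ...
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2

# ... then delete the subsystem underneath it; the pending commands come back
# as the sct=0/sc=8 abort completions that fill the next stretch of this log.
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1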
00:14:23.150 01:00:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.150 01:00:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.150 01:00:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error 
(sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 [2024-07-25 01:00:16.385009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f92cc00c2f0 is same with the state(5) to be set 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Write completed with error (sct=0, sc=8) 
00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 Write completed with error (sct=0, sc=8) 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.409 starting I/O failed: -6 00:14:23.409 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, 
sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 Write completed with error (sct=0, sc=8) 00:14:23.410 Read completed with error (sct=0, sc=8) 00:14:23.410 starting I/O failed: -6 00:14:23.410 starting I/O failed: -6 00:14:23.410 starting I/O failed: -6 00:14:23.410 starting I/O failed: -6 00:14:23.410 starting I/O failed: -6 00:14:23.410 starting I/O failed: -6 00:14:23.410 starting I/O failed: -6 00:14:23.410 starting I/O failed: -6 00:14:24.345 [2024-07-25 01:00:17.359733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b4620 is same with the state(5) to be set 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed 
with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 [2024-07-25 01:00:17.383412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797ec0 is same with the state(5) to be set 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 [2024-07-25 01:00:17.387318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f92cc00c600 is same with the state(5) to be set 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 [2024-07-25 01:00:17.387476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f92cc00bfe0 is same with the state(5) to be set 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed 
with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Read completed with error (sct=0, sc=8) 00:14:24.345 Write completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Read completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 Write completed with error (sct=0, sc=8) 00:14:24.346 [2024-07-25 01:00:17.387720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797b00 is same with the state(5) to be set 00:14:24.346 Initializing NVMe Controllers 00:14:24.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.346 Controller IO queue size 128, less than required. 00:14:24.346 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:24.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:24.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:24.346 Initialization complete. Launching workers. 
00:14:24.346 ======================================================== 00:14:24.346 Latency(us) 00:14:24.346 Device Information : IOPS MiB/s Average min max 00:14:24.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 185.57 0.09 908587.74 689.46 1012458.13 00:14:24.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.78 0.08 924471.77 581.53 1013025.07 00:14:24.346 ======================================================== 00:14:24.346 Total : 343.35 0.17 915887.05 581.53 1013025.07 00:14:24.346 00:14:24.346 [2024-07-25 01:00:17.388750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b4620 (9): Bad file descriptor 00:14:24.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:24.346 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.346 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:24.346 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3723751 00:14:24.346 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3723751 00:14:24.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3723751) - No such process 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3723751 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3723751 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3723751 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.911 [2024-07-25 01:00:17.912519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3724149 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3724149 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:24.911 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:24.911 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.911 [2024-07-25 01:00:17.975344] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
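(The @56-@60 lines here and in the iterations that follow are the script's wait idiom: poll the perf process with kill -0 — signal 0 checks existence without delivering anything — until it exits, bounded so a hang fails the run. A sketch reconstructed from the traced line numbers; the actual shell in delete_subsystem.sh may differ in detail.)

delay=0
while kill -0 "$perf_pid" 2> /dev/null; do   # perf still alive?
        (( delay++ > 20 )) && exit 1         # ~10 s budget at 0.5 s per nap
        sleep 0.5
done

Expected numbers for this second run: with the delay bdev pinning every I/O near 1,000,000 us and 128 commands outstanding per core (-q 128), throughput works out to roughly 128 / 1.0 s = 128 IOPS per core, which is what the latency table further down reports (128.00 IOPS per core, averages just above 1,000,000 us). Unlike the first run, no subsystem is deleted this time, so perf finishes its 3 seconds (-t 3) cleanly and the poll loop simply sees it exit.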
00:14:25.474 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:25.474 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3724149
00:14:25.474 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:26.036 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:26.036 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3724149
00:14:26.036 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:26.293 01:00:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:26.293 01:00:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3724149
00:14:26.293 01:00:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:26.863 01:00:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:26.863 01:00:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3724149
00:14:26.863 01:00:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:27.427 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:27.427 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3724149
00:14:27.427 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:27.991 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:27.991 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3724149
00:14:27.991 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:28.248 Initializing NVMe Controllers
00:14:28.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:28.248 Controller IO queue size 128, less than required.
00:14:28.248 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:28.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:28.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:28.248 Initialization complete. Launching workers.
00:14:28.248 ========================================================
00:14:28.248 Latency(us)
00:14:28.248 Device Information : IOPS MiB/s Average min max
00:14:28.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004233.04 1000222.49 1010915.62
00:14:28.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005015.17 1000198.01 1042442.94
00:14:28.248 ========================================================
00:14:28.248 Total : 256.00 0.12 1004624.10 1000198.01 1042442.94
00:14:28.248
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3724149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3724149) - No such process
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3724149
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:28.505 rmmod nvme_tcp
00:14:28.505 rmmod nvme_fabrics
00:14:28.505 rmmod nvme_keyring
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3723615 ']'
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3723615
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3723615 ']'
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3723615
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3723615
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3723615'
00:14:28.505 killing process with pid 3723615
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3723615
00:14:28.505 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait
3723615 00:14:28.762 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.762 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.762 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.762 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.762 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.762 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.762 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.762 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.665 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.665 00:14:30.665 real 0m12.082s 00:14:30.665 user 0m27.363s 00:14:30.665 sys 0m2.921s 00:14:30.665 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:30.665 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:30.665 ************************************ 00:14:30.665 END TEST nvmf_delete_subsystem 00:14:30.665 ************************************ 00:14:30.665 01:00:23 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:30.665 01:00:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:30.665 01:00:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:30.665 01:00:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.922 ************************************ 00:14:30.922 START TEST nvmf_ns_masking 00:14:30.922 ************************************ 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:30.922 * Looking for test storage... 
00:14:30.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=7743a4a2-78a2-415a-bc56-11fbac627060 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.922 01:00:23 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.922 01:00:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:32.821 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.821 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:32.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:32.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:32.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:14:32.822 00:14:32.822 --- 10.0.0.2 ping statistics --- 00:14:32.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.822 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:14:32.822 00:14:32.822 --- 10.0.0.1 ping statistics --- 00:14:32.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.822 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3726489 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3726489 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3726489 ']' 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:32.822 01:00:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:33.078 [2024-07-25 01:00:25.977304] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
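The block above is nvmf_tcp_init plus its sanity pings: one of the two back-to-back ports is moved into a private network namespace so that a single host can act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace). Condensed from the trace into a standalone recipe, run as root:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1            # drop stale addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # default NVMe/TCP port
    ping -c 1 10.0.0.2                                            # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace -> root ns

Every target-side command from here on, including nvmf_tgt itself, is prefixed with "ip netns exec cvl_0_0_ns_spdk", which is what the NVMF_TARGET_NS_CMD/NVMF_APP assignment in the trace sets up.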
00:14:33.078 [2024-07-25 01:00:25.977377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.078 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.078 [2024-07-25 01:00:26.041568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.078 [2024-07-25 01:00:26.127102] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.078 [2024-07-25 01:00:26.127144] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.078 [2024-07-25 01:00:26.127158] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.078 [2024-07-25 01:00:26.127170] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.078 [2024-07-25 01:00:26.127180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.078 [2024-07-25 01:00:26.127332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.078 [2024-07-25 01:00:26.127388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.078 [2024-07-25 01:00:26.127386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.078 [2024-07-25 01:00:26.127359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.334 01:00:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:33.334 01:00:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:33.334 01:00:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.334 01:00:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:33.334 01:00:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:33.334 01:00:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.334 01:00:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:33.590 [2024-07-25 01:00:26.552012] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.590 01:00:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:33.590 01:00:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:33.590 01:00:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:33.849 Malloc1 00:14:33.849 01:00:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:34.105 Malloc2 00:14:34.105 01:00:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:34.361 01:00:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:34.618 01:00:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.873 [2024-07-25 01:00:27.862777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.873 01:00:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:34.873 01:00:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7743a4a2-78a2-415a-bc56-11fbac627060 -a 10.0.0.2 -s 4420 -i 4 00:14:35.129 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:35.129 01:00:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:35.129 01:00:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.129 01:00:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:35.129 01:00:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.017 [ 0]:0x1 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.017 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.274 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=122bad1f2795434183a2e651266128b3 00:14:37.274 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 122bad1f2795434183a2e651266128b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.274 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:14:37.530 [ 0]:0x1 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=122bad1f2795434183a2e651266128b3 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 122bad1f2795434183a2e651266128b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.530 [ 1]:0x2 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3fa3608e0b534b5eb4dbe7b6f2a7eebc 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3fa3608e0b534b5eb4dbe7b6f2a7eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.530 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.787 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:38.042 01:00:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:38.042 01:00:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7743a4a2-78a2-415a-bc56-11fbac627060 -a 10.0.0.2 -s 4420 -i 4 00:14:38.298 01:00:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:38.298 01:00:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:38.298 01:00:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.298 01:00:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:38.298 01:00:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:38.298 01:00:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:40.192 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:40.193 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:40.449 [ 0]:0x2 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3fa3608e0b534b5eb4dbe7b6f2a7eebc 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3fa3608e0b534b5eb4dbe7b6f2a7eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.449 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:40.705 [ 0]:0x1 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=122bad1f2795434183a2e651266128b3 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 122bad1f2795434183a2e651266128b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:40.705 [ 1]:0x2 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3fa3608e0b534b5eb4dbe7b6f2a7eebc 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3fa3608e0b534b5eb4dbe7b6f2a7eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.705 01:00:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:40.961 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:41.217 
01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:41.217 [ 0]:0x2 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3fa3608e0b534b5eb4dbe7b6f2a7eebc 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3fa3608e0b534b5eb4dbe7b6f2a7eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.217 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:41.473 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:41.473 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7743a4a2-78a2-415a-bc56-11fbac627060 -a 10.0.0.2 -s 4420 -i 4 00:14:41.729 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:41.729 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:41.729 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.729 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:41.729 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:41.729 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:43.620 01:00:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:43.620 01:00:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:43.620 01:00:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.620 01:00:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:43.620 01:00:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.620 01:00:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:43.620 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:43.620 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:43.877 01:00:36 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:43.877 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:43.877 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:43.877 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:43.877 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:43.877 [ 0]:0x1 00:14:43.877 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:43.877 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:43.877 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=122bad1f2795434183a2e651266128b3 00:14:43.878 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 122bad1f2795434183a2e651266128b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.878 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:43.878 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:43.878 01:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:43.878 [ 1]:0x2 00:14:43.878 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:43.878 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.135 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3fa3608e0b534b5eb4dbe7b6f2a7eebc 00:14:44.135 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3fa3608e0b534b5eb4dbe7b6f2a7eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.135 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:44.392 [ 0]:0x2 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3fa3608e0b534b5eb4dbe7b6f2a7eebc 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3fa3608e0b534b5eb4dbe7b6f2a7eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:44.392 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:44.649 [2024-07-25 01:00:37.598308] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:44.649 request: 00:14:44.649 { 00:14:44.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.649 "nsid": 2, 00:14:44.649 "host": "nqn.2016-06.io.spdk:host1", 00:14:44.649 "method": 
"nvmf_ns_remove_host", 00:14:44.649 "req_id": 1 00:14:44.649 } 00:14:44.649 Got JSON-RPC error response 00:14:44.649 response: 00:14:44.649 { 00:14:44.649 "code": -32602, 00:14:44.649 "message": "Invalid parameters" 00:14:44.649 } 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:44.649 [ 0]:0x2 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3fa3608e0b534b5eb4dbe7b6f2a7eebc 00:14:44.649 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3fa3608e0b534b5eb4dbe7b6f2a7eebc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.650 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:44.650 01:00:37 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.906 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.906 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:44.906 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:44.906 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.906 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.164 rmmod nvme_tcp 00:14:45.164 rmmod nvme_fabrics 00:14:45.164 rmmod nvme_keyring 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3726489 ']' 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3726489 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3726489 ']' 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3726489 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3726489 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3726489' 00:14:45.164 killing process with pid 3726489 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3726489 00:14:45.164 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3726489 00:14:45.422 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.422 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.422 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.422 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.422 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.422 01:00:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.422 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.422 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.351 
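To condense the test body that just completed: namespace masking is driven entirely over JSON-RPC on the target, and visibility is verified from the initiator with nvme-cli. A sketch of the core sequence, with NQNs and addresses taken from the log and the full rpc.py path shortened; the inline comments are interpretation, not trace output:

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
    nvme list-ns /dev/nvme0 | grep 0x1                    # masked: no match
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # masked: all-zero NGUID
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0 | grep 0x1                    # now visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The negative case near the end of the trace is the other half of the contract: nvmf_ns_remove_host against namespace 2, which was added without --no-auto-visible, fails with JSON-RPC error -32602 ("Invalid parameters"), since per-host visibility can only be managed on namespaces created as masked.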
01:00:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:47.351 00:14:47.351 real 0m16.635s 00:14:47.351 user 0m52.145s 00:14:47.351 sys 0m3.718s 00:14:47.351 01:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:47.351 01:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:47.351 ************************************ 00:14:47.351 END TEST nvmf_ns_masking 00:14:47.351 ************************************ 00:14:47.351 01:00:40 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:47.351 01:00:40 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:47.351 01:00:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:47.351 01:00:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:47.351 01:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:47.609 ************************************ 00:14:47.609 START TEST nvmf_nvme_cli 00:14:47.609 ************************************ 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:47.609 * Looking for test storage... 00:14:47.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.609 01:00:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.610 01:00:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:49.509 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:49.509 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.509 01:00:42 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:49.509 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:49.509 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.509 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:49.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:14:49.767 00:14:49.767 --- 10.0.0.2 ping statistics --- 00:14:49.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.767 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:14:49.767 00:14:49.767 --- 10.0.0.1 ping statistics --- 00:14:49.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.767 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3730040 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3730040 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3730040 ']' 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
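Startup of this second target follows the same pattern as the first: nvmf_tgt is launched inside the target namespace and the harness blocks until the application's RPC socket answers. Roughly what nvmfappstart/waitforlisten amount to; the polling loop below is a simplified stand-in for the real helper, not its actual implementation:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # /var/tmp/spdk.sock is a filesystem path, so it is reachable across
    # network namespaces and rpc.py can poll it from the root namespace.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done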
00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.767 01:00:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.767 [2024-07-25 01:00:42.811410] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:49.767 [2024-07-25 01:00:42.811498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.767 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.767 [2024-07-25 01:00:42.882432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.025 [2024-07-25 01:00:42.979800] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.025 [2024-07-25 01:00:42.979868] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.025 [2024-07-25 01:00:42.979885] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.025 [2024-07-25 01:00:42.979898] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.025 [2024-07-25 01:00:42.979910] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.025 [2024-07-25 01:00:42.983272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.025 [2024-07-25 01:00:42.983324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.025 [2024-07-25 01:00:42.983352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.025 [2024-07-25 01:00:42.983356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.025 [2024-07-25 01:00:43.146861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.025 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.283 Malloc0 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.283 Malloc1 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.283 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.284 [2024-07-25 01:00:43.228118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:50.284 00:14:50.284 Discovery Log Number of Records 2, Generation counter 2 00:14:50.284 =====Discovery Log Entry 0====== 00:14:50.284 trtype: tcp 00:14:50.284 adrfam: ipv4 00:14:50.284 subtype: current discovery subsystem 00:14:50.284 treq: not required 00:14:50.284 portid: 0 00:14:50.284 trsvcid: 4420 00:14:50.284 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:50.284 traddr: 10.0.0.2 00:14:50.284 eflags: explicit discovery connections, duplicate discovery information 00:14:50.284 sectype: none 00:14:50.284 =====Discovery Log Entry 1====== 00:14:50.284 trtype: tcp 00:14:50.284 adrfam: ipv4 00:14:50.284 subtype: nvme subsystem 00:14:50.284 treq: not required 00:14:50.284 portid: 0 00:14:50.284 trsvcid: 
4420 00:14:50.284 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:50.284 traddr: 10.0.0.2 00:14:50.284 eflags: none 00:14:50.284 sectype: none 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:50.284 01:00:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.214 01:00:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:51.214 01:00:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:51.214 01:00:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.214 01:00:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:51.214 01:00:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:51.214 01:00:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:53.108 01:00:46 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.108 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:53.109 /dev/nvme0n1 ]] 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:53.109 01:00:46 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.109 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.109 rmmod nvme_tcp 00:14:53.109 rmmod nvme_fabrics 00:14:53.109 rmmod nvme_keyring 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3730040 ']' 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3730040 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3730040 ']' 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3730040 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3730040 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3730040' 00:14:53.366 killing process with pid 3730040 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3730040 00:14:53.366 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3730040 00:14:53.624 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.624 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.624 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.624 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.624 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.624 01:00:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.624 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.624 01:00:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.523 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.523 00:14:55.523 real 0m8.087s 00:14:55.523 user 0m14.761s 00:14:55.523 sys 0m2.137s 00:14:55.523 01:00:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:55.523 01:00:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.523 ************************************ 00:14:55.523 END TEST nvmf_nvme_cli 00:14:55.523 ************************************ 00:14:55.523 01:00:48 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:55.523 01:00:48 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:55.523 01:00:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:55.523 01:00:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:55.523 01:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.523 ************************************ 00:14:55.523 START TEST nvmf_vfio_user 00:14:55.523 ************************************ 00:14:55.523 01:00:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:55.780 * Looking for test storage... 00:14:55.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:55.780 
01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3730857 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3730857' 00:14:55.780 Process pid: 3730857 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3730857 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3730857 ']' 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.780 01:00:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:55.780 [2024-07-25 01:00:48.770420] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:55.780 [2024-07-25 01:00:48.770514] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.780 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.780 [2024-07-25 01:00:48.830112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.780 [2024-07-25 01:00:48.915619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.780 [2024-07-25 01:00:48.915675] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.780 [2024-07-25 01:00:48.915689] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.780 [2024-07-25 01:00:48.915700] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.780 [2024-07-25 01:00:48.915725] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
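With the vfio-user target process up, setup_nvmf_vfio_user configures it entirely over the RPC socket; the trace that follows runs, for each of the two devices, the same create-bdev / create-subsystem / add-ns / add-listener pattern the TCP test used. A condensed sketch of that loop, with rpc.py standing in for the full scripts/rpc.py path used in the trace:

    rpc.py nvmf_create_transport -t VFIOUSER          # register the vfio-user transport
    mkdir -p /var/run/vfio-user
    for i in 1 2; do                                  # NUM_DEVICES=2
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i  # 64 MB RAM disk, 512-byte blocks
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Note that the listener address is a directory, not an IP:port pair: the target drops a vfio-user control socket (the cntrl file seen later in the trace) under that path, and initiators attach to it as if it were a PCI function.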
00:14:55.780 [2024-07-25 01:00:48.915817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.780 [2024-07-25 01:00:48.915881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.780 [2024-07-25 01:00:48.915949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.780 [2024-07-25 01:00:48.915951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.036 01:00:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:56.036 01:00:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:56.036 01:00:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:56.965 01:00:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:57.222 01:00:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:57.222 01:00:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:57.222 01:00:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:57.222 01:00:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:57.222 01:00:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:57.480 Malloc1 00:14:57.480 01:00:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:57.737 01:00:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:57.993 01:00:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:58.248 01:00:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.248 01:00:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:58.248 01:00:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:58.504 Malloc2 00:14:58.504 01:00:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:58.760 01:00:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:59.019 01:00:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:59.276 01:00:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:59.276 01:00:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:59.276 01:00:52 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.276 01:00:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:59.276 01:00:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:59.276 01:00:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:59.276 [2024-07-25 01:00:52.351719] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:14:59.276 [2024-07-25 01:00:52.351758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731275 ] 00:14:59.276 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.276 [2024-07-25 01:00:52.383741] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:59.276 [2024-07-25 01:00:52.388271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.276 [2024-07-25 01:00:52.388317] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd7d6235000 00:14:59.276 [2024-07-25 01:00:52.389268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.276 [2024-07-25 01:00:52.390255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.276 [2024-07-25 01:00:52.391258] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.276 [2024-07-25 01:00:52.392263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.276 [2024-07-25 01:00:52.393273] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.276 [2024-07-25 01:00:52.394270] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.276 [2024-07-25 01:00:52.395276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.276 [2024-07-25 01:00:52.398251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.276 [2024-07-25 01:00:52.398300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.276 [2024-07-25 01:00:52.398320] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd7d4fe7000 00:14:59.276 [2024-07-25 01:00:52.399441] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.276 [2024-07-25 01:00:52.415140] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:59.276 [2024-07-25 01:00:52.415177] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:59.276 [2024-07-25 01:00:52.418415] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:59.276 [2024-07-25 01:00:52.418473] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:59.276 [2024-07-25 01:00:52.418578] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:59.276 [2024-07-25 01:00:52.418606] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:59.276 [2024-07-25 01:00:52.418616] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:59.276 [2024-07-25 01:00:52.419413] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:59.276 [2024-07-25 01:00:52.419439] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:59.276 [2024-07-25 01:00:52.419453] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:59.276 [2024-07-25 01:00:52.420415] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:59.276 [2024-07-25 01:00:52.420433] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:59.276 [2024-07-25 01:00:52.420446] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:59.276 [2024-07-25 01:00:52.421427] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:59.276 [2024-07-25 01:00:52.421445] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:59.276 [2024-07-25 01:00:52.422432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:59.276 [2024-07-25 01:00:52.422450] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:59.276 [2024-07-25 01:00:52.422459] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:59.276 [2024-07-25 01:00:52.422470] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:59.276 [2024-07-25 01:00:52.422580] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:59.276 [2024-07-25 01:00:52.422593] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:59.276 [2024-07-25 01:00:52.422602] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:59.276 [2024-07-25 01:00:52.423446] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:59.276 [2024-07-25 01:00:52.424445] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:59.276 [2024-07-25 01:00:52.425451] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:59.276 [2024-07-25 01:00:52.426444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.276 [2024-07-25 01:00:52.426552] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:59.535 [2024-07-25 01:00:52.427463] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:59.535 [2024-07-25 01:00:52.427481] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:59.535 [2024-07-25 01:00:52.427491] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427516] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:59.535 [2024-07-25 01:00:52.427529] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427557] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.535 [2024-07-25 01:00:52.427567] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.535 [2024-07-25 01:00:52.427585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.535 [2024-07-25 01:00:52.427651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:59.535 [2024-07-25 01:00:52.427671] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:59.535 [2024-07-25 01:00:52.427681] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:59.535 [2024-07-25 01:00:52.427688] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:59.535 [2024-07-25 01:00:52.427697] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:59.535 [2024-07-25 01:00:52.427705] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:59.535 [2024-07-25 01:00:52.427712] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:59.535 [2024-07-25 01:00:52.427722] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427734] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:59.535 [2024-07-25 01:00:52.427772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:59.535 [2024-07-25 01:00:52.427790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.535 [2024-07-25 01:00:52.427820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.535 [2024-07-25 01:00:52.427834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.535 [2024-07-25 01:00:52.427846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.535 [2024-07-25 01:00:52.427855] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427887] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:59.535 [2024-07-25 01:00:52.427917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:59.535 [2024-07-25 01:00:52.427928] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:59.535 [2024-07-25 01:00:52.427937] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427947] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427959] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.427973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.535 [2024-07-25 01:00:52.427988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:59.535 [2024-07-25 01:00:52.428051] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428066] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428078] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:59.535 [2024-07-25 01:00:52.428086] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:59.535 [2024-07-25 01:00:52.428096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:59.535 [2024-07-25 01:00:52.428114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:59.535 [2024-07-25 01:00:52.428129] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:59.535 [2024-07-25 01:00:52.428143] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428156] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428167] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.535 [2024-07-25 01:00:52.428178] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.535 [2024-07-25 01:00:52.428188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.535 [2024-07-25 01:00:52.428208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:59.535 [2024-07-25 01:00:52.428228] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428248] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428277] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.535 [2024-07-25 01:00:52.428285] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.535 [2024-07-25 01:00:52.428295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.535 [2024-07-25 01:00:52.428313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:59.535 [2024-07-25 01:00:52.428327] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428338] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
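The spdk_nvme_identify run above reaches the target through the vfio-user socket, but the controller bring-up it logs is the ordinary NVMe register handshake; only the transport differs. Decoding the nvme_vfio_ctrlr_get/set_reg_* offsets against the standard NVMe 1.3 register map (an interpretation of the trace, not additional log output):

    0x00 CAP   read first during scan (0x201e0100ff)
    0x08 VS    0x10300, i.e. NVMe 1.3.0
    0x14 CC    read as 0 (disabled), then written 0x460001: EN=1 with 64-byte SQ and 16-byte CQ entries
    0x1c CSTS  polled until RDY tracks EN (0x0 -> 0x1)
    0x24 AQA   0xff00ff, a 256-entry (0-based 0xff) admin SQ/CQ pair
    0x28 ASQ / 0x30 ACQ  admin queue base addresses in the mapped memory window

To replay just this step against a live target, the command from the trace should work standalone (path shortened here to be relative to the SPDK build directory; the -L flags enable the *DEBUG* components whose output dominates this section):

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci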
00:14:59.535 [2024-07-25 01:00:52.428351] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428361] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:59.535 [2024-07-25 01:00:52.428369] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:59.536 [2024-07-25 01:00:52.428378] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:59.536 [2024-07-25 01:00:52.428385] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:59.536 [2024-07-25 01:00:52.428393] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:59.536 [2024-07-25 01:00:52.428422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:59.536 [2024-07-25 01:00:52.428440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:59.536 [2024-07-25 01:00:52.428459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:59.536 [2024-07-25 01:00:52.428471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:59.536 [2024-07-25 01:00:52.428487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:59.536 [2024-07-25 01:00:52.428498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:59.536 [2024-07-25 01:00:52.428514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.536 [2024-07-25 01:00:52.428525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:59.536 [2024-07-25 01:00:52.428543] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:59.536 [2024-07-25 01:00:52.428570] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:59.536 [2024-07-25 01:00:52.428577] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:59.536 [2024-07-25 01:00:52.428583] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:59.536 [2024-07-25 01:00:52.428592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:59.536 [2024-07-25 01:00:52.428604] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:59.536 [2024-07-25 01:00:52.428612] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:59.536 [2024-07-25 01:00:52.428620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:59.536 [2024-07-25 01:00:52.428631] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:59.536 [2024-07-25 01:00:52.428639] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.536 [2024-07-25 01:00:52.428647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.536 [2024-07-25 01:00:52.428658] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:59.536 [2024-07-25 01:00:52.428680] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:59.536 [2024-07-25 01:00:52.428690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:59.536 [2024-07-25 01:00:52.428701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:59.536 [2024-07-25 01:00:52.428721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:59.536 [2024-07-25 01:00:52.428752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:59.536 [2024-07-25 01:00:52.428767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:59.536 ===================================================== 00:14:59.536 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:59.536 ===================================================== 00:14:59.536 Controller Capabilities/Features 00:14:59.536 ================================ 00:14:59.536 Vendor ID: 4e58 00:14:59.536 Subsystem Vendor ID: 4e58 00:14:59.536 Serial Number: SPDK1 00:14:59.536 Model Number: SPDK bdev Controller 00:14:59.536 Firmware Version: 24.05.1 00:14:59.536 Recommended Arb Burst: 6 00:14:59.536 IEEE OUI Identifier: 8d 6b 50 00:14:59.536 Multi-path I/O 00:14:59.536 May have multiple subsystem ports: Yes 00:14:59.536 May have multiple controllers: Yes 00:14:59.536 Associated with SR-IOV VF: No 00:14:59.536 Max Data Transfer Size: 131072 00:14:59.536 Max Number of Namespaces: 32 00:14:59.536 Max Number of I/O Queues: 127 00:14:59.536 NVMe Specification Version (VS): 1.3 00:14:59.536 NVMe Specification Version (Identify): 1.3 00:14:59.536 Maximum Queue Entries: 256 00:14:59.536 Contiguous Queues Required: Yes 00:14:59.536 Arbitration Mechanisms Supported 00:14:59.536 Weighted Round Robin: Not Supported 00:14:59.536 Vendor Specific: Not Supported 00:14:59.536 Reset Timeout: 15000 ms 00:14:59.536 Doorbell Stride: 4 bytes 00:14:59.536 NVM Subsystem Reset: Not Supported 00:14:59.536 Command Sets Supported 00:14:59.536 NVM Command Set: Supported 00:14:59.536 Boot Partition: Not Supported 00:14:59.536 Memory Page Size Minimum: 4096 bytes 00:14:59.536 Memory Page Size Maximum: 4096 bytes 00:14:59.536 Persistent Memory Region: Not Supported 00:14:59.536 Optional Asynchronous Events Supported 00:14:59.536 Namespace Attribute Notices: Supported 00:14:59.536 Firmware Activation Notices: Not Supported 00:14:59.536 ANA Change Notices: Not Supported 
00:14:59.536 PLE Aggregate Log Change Notices: Not Supported 00:14:59.536 LBA Status Info Alert Notices: Not Supported 00:14:59.536 EGE Aggregate Log Change Notices: Not Supported 00:14:59.536 Normal NVM Subsystem Shutdown event: Not Supported 00:14:59.536 Zone Descriptor Change Notices: Not Supported 00:14:59.536 Discovery Log Change Notices: Not Supported 00:14:59.536 Controller Attributes 00:14:59.536 128-bit Host Identifier: Supported 00:14:59.536 Non-Operational Permissive Mode: Not Supported 00:14:59.536 NVM Sets: Not Supported 00:14:59.536 Read Recovery Levels: Not Supported 00:14:59.536 Endurance Groups: Not Supported 00:14:59.536 Predictable Latency Mode: Not Supported 00:14:59.536 Traffic Based Keep ALive: Not Supported 00:14:59.536 Namespace Granularity: Not Supported 00:14:59.536 SQ Associations: Not Supported 00:14:59.536 UUID List: Not Supported 00:14:59.536 Multi-Domain Subsystem: Not Supported 00:14:59.536 Fixed Capacity Management: Not Supported 00:14:59.536 Variable Capacity Management: Not Supported 00:14:59.536 Delete Endurance Group: Not Supported 00:14:59.536 Delete NVM Set: Not Supported 00:14:59.536 Extended LBA Formats Supported: Not Supported 00:14:59.536 Flexible Data Placement Supported: Not Supported 00:14:59.536 00:14:59.536 Controller Memory Buffer Support 00:14:59.536 ================================ 00:14:59.536 Supported: No 00:14:59.536 00:14:59.536 Persistent Memory Region Support 00:14:59.536 ================================ 00:14:59.536 Supported: No 00:14:59.536 00:14:59.536 Admin Command Set Attributes 00:14:59.536 ============================ 00:14:59.536 Security Send/Receive: Not Supported 00:14:59.536 Format NVM: Not Supported 00:14:59.536 Firmware Activate/Download: Not Supported 00:14:59.536 Namespace Management: Not Supported 00:14:59.536 Device Self-Test: Not Supported 00:14:59.536 Directives: Not Supported 00:14:59.536 NVMe-MI: Not Supported 00:14:59.536 Virtualization Management: Not Supported 00:14:59.536 Doorbell Buffer Config: Not Supported 00:14:59.536 Get LBA Status Capability: Not Supported 00:14:59.536 Command & Feature Lockdown Capability: Not Supported 00:14:59.536 Abort Command Limit: 4 00:14:59.536 Async Event Request Limit: 4 00:14:59.536 Number of Firmware Slots: N/A 00:14:59.536 Firmware Slot 1 Read-Only: N/A 00:14:59.536 Firmware Activation Without Reset: N/A 00:14:59.536 Multiple Update Detection Support: N/A 00:14:59.536 Firmware Update Granularity: No Information Provided 00:14:59.536 Per-Namespace SMART Log: No 00:14:59.536 Asymmetric Namespace Access Log Page: Not Supported 00:14:59.536 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:59.536 Command Effects Log Page: Supported 00:14:59.536 Get Log Page Extended Data: Supported 00:14:59.536 Telemetry Log Pages: Not Supported 00:14:59.536 Persistent Event Log Pages: Not Supported 00:14:59.536 Supported Log Pages Log Page: May Support 00:14:59.536 Commands Supported & Effects Log Page: Not Supported 00:14:59.536 Feature Identifiers & Effects Log Page:May Support 00:14:59.536 NVMe-MI Commands & Effects Log Page: May Support 00:14:59.536 Data Area 4 for Telemetry Log: Not Supported 00:14:59.536 Error Log Page Entries Supported: 128 00:14:59.536 Keep Alive: Supported 00:14:59.536 Keep Alive Granularity: 10000 ms 00:14:59.536 00:14:59.536 NVM Command Set Attributes 00:14:59.536 ========================== 00:14:59.536 Submission Queue Entry Size 00:14:59.536 Max: 64 00:14:59.536 Min: 64 00:14:59.536 Completion Queue Entry Size 00:14:59.536 Max: 16 00:14:59.536 Min: 16 
00:14:59.536 Number of Namespaces: 32 00:14:59.536 Compare Command: Supported 00:14:59.536 Write Uncorrectable Command: Not Supported 00:14:59.536 Dataset Management Command: Supported 00:14:59.536 Write Zeroes Command: Supported 00:14:59.537 Set Features Save Field: Not Supported 00:14:59.537 Reservations: Not Supported 00:14:59.537 Timestamp: Not Supported 00:14:59.537 Copy: Supported 00:14:59.537 Volatile Write Cache: Present 00:14:59.537 Atomic Write Unit (Normal): 1 00:14:59.537 Atomic Write Unit (PFail): 1 00:14:59.537 Atomic Compare & Write Unit: 1 00:14:59.537 Fused Compare & Write: Supported 00:14:59.537 Scatter-Gather List 00:14:59.537 SGL Command Set: Supported (Dword aligned) 00:14:59.537 SGL Keyed: Not Supported 00:14:59.537 SGL Bit Bucket Descriptor: Not Supported 00:14:59.537 SGL Metadata Pointer: Not Supported 00:14:59.537 Oversized SGL: Not Supported 00:14:59.537 SGL Metadata Address: Not Supported 00:14:59.537 SGL Offset: Not Supported 00:14:59.537 Transport SGL Data Block: Not Supported 00:14:59.537 Replay Protected Memory Block: Not Supported 00:14:59.537 00:14:59.537 Firmware Slot Information 00:14:59.537 ========================= 00:14:59.537 Active slot: 1 00:14:59.537 Slot 1 Firmware Revision: 24.05.1 00:14:59.537 00:14:59.537 00:14:59.537 Commands Supported and Effects 00:14:59.537 ============================== 00:14:59.537 Admin Commands 00:14:59.537 -------------- 00:14:59.537 Get Log Page (02h): Supported 00:14:59.537 Identify (06h): Supported 00:14:59.537 Abort (08h): Supported 00:14:59.537 Set Features (09h): Supported 00:14:59.537 Get Features (0Ah): Supported 00:14:59.537 Asynchronous Event Request (0Ch): Supported 00:14:59.537 Keep Alive (18h): Supported 00:14:59.537 I/O Commands 00:14:59.537 ------------ 00:14:59.537 Flush (00h): Supported LBA-Change 00:14:59.537 Write (01h): Supported LBA-Change 00:14:59.537 Read (02h): Supported 00:14:59.537 Compare (05h): Supported 00:14:59.537 Write Zeroes (08h): Supported LBA-Change 00:14:59.537 Dataset Management (09h): Supported LBA-Change 00:14:59.537 Copy (19h): Supported LBA-Change 00:14:59.537 Unknown (79h): Supported LBA-Change 00:14:59.537 Unknown (7Ah): Supported 00:14:59.537 00:14:59.537 Error Log 00:14:59.537 ========= 00:14:59.537 00:14:59.537 Arbitration 00:14:59.537 =========== 00:14:59.537 Arbitration Burst: 1 00:14:59.537 00:14:59.537 Power Management 00:14:59.537 ================ 00:14:59.537 Number of Power States: 1 00:14:59.537 Current Power State: Power State #0 00:14:59.537 Power State #0: 00:14:59.537 Max Power: 0.00 W 00:14:59.537 Non-Operational State: Operational 00:14:59.537 Entry Latency: Not Reported 00:14:59.537 Exit Latency: Not Reported 00:14:59.537 Relative Read Throughput: 0 00:14:59.537 Relative Read Latency: 0 00:14:59.537 Relative Write Throughput: 0 00:14:59.537 Relative Write Latency: 0 00:14:59.537 Idle Power: Not Reported 00:14:59.537 Active Power: Not Reported 00:14:59.537 Non-Operational Permissive Mode: Not Supported 00:14:59.537 00:14:59.537 Health Information 00:14:59.537 ================== 00:14:59.537 Critical Warnings: 00:14:59.537 Available Spare Space: OK 00:14:59.537 Temperature: OK 00:14:59.537 Device Reliability: OK 00:14:59.537 Read Only: No 00:14:59.537 Volatile Memory Backup: OK 00:14:59.537 Current Temperature: 0 Kelvin (-273 Celsius) [2024-07-25 01:00:52.428900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:59.537 [2024-07-25 01:00:52.428916] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:59.537 [2024-07-25 01:00:52.428955] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:59.537 [2024-07-25 01:00:52.428972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.537 [2024-07-25 01:00:52.428983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.537 [2024-07-25 01:00:52.428993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.537 [2024-07-25 01:00:52.429003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.537 [2024-07-25 01:00:52.431255] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:59.537 [2024-07-25 01:00:52.431276] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:59.537 [2024-07-25 01:00:52.431481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.537 [2024-07-25 01:00:52.431570] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:59.537 [2024-07-25 01:00:52.431602] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:59.537 [2024-07-25 01:00:52.432497] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:59.537 [2024-07-25 01:00:52.432520] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:59.537 [2024-07-25 01:00:52.432602] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:59.537 [2024-07-25 01:00:52.436268] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.537 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:59.537 Available Spare: 0% 00:14:59.537 Available Spare Threshold: 0% 00:14:59.537 Life Percentage Used: 0% 00:14:59.537 Data Units Read: 0 00:14:59.537 Data Units Written: 0 00:14:59.537 Host Read Commands: 0 00:14:59.537 Host Write Commands: 0 00:14:59.537 Controller Busy Time: 0 minutes 00:14:59.537 Power Cycles: 0 00:14:59.537 Power On Hours: 0 hours 00:14:59.537 Unsafe Shutdowns: 0 00:14:59.537 Unrecoverable Media Errors: 0 00:14:59.537 Lifetime Error Log Entries: 0 00:14:59.537 Warning Temperature Time: 0 minutes 00:14:59.537 Critical Temperature Time: 0 minutes 00:14:59.537 00:14:59.537 Number of Queues 00:14:59.537 ================ 00:14:59.537 Number of I/O Submission Queues: 127 00:14:59.537 Number of I/O Completion Queues: 127 00:14:59.537 00:14:59.537 Active Namespaces 00:14:59.537 ================= 00:14:59.537 Namespace ID:1 00:14:59.537 Error Recovery Timeout: Unlimited 00:14:59.537 Command Set Identifier: NVM (00h) 00:14:59.537 Deallocate: Supported 00:14:59.537 Deallocated/Unwritten Error: Not Supported
00:14:59.537 Deallocated Read Value: Unknown 00:14:59.537 Deallocate in Write Zeroes: Not Supported 00:14:59.537 Deallocated Guard Field: 0xFFFF 00:14:59.537 Flush: Supported 00:14:59.537 Reservation: Supported 00:14:59.537 Namespace Sharing Capabilities: Multiple Controllers 00:14:59.537 Size (in LBAs): 131072 (0GiB) 00:14:59.537 Capacity (in LBAs): 131072 (0GiB) 00:14:59.537 Utilization (in LBAs): 131072 (0GiB) 00:14:59.537 NGUID: 21B51829DE8F45A5AE0BC239D30B7B74 00:14:59.537 UUID: 21b51829-de8f-45a5-ae0b-c239d30b7b74 00:14:59.537 Thin Provisioning: Not Supported 00:14:59.537 Per-NS Atomic Units: Yes 00:14:59.537 Atomic Boundary Size (Normal): 0 00:14:59.537 Atomic Boundary Size (PFail): 0 00:14:59.537 Atomic Boundary Offset: 0 00:14:59.537 Maximum Single Source Range Length: 65535 00:14:59.537 Maximum Copy Length: 65535 00:14:59.537 Maximum Source Range Count: 1 00:14:59.537 NGUID/EUI64 Never Reused: No 00:14:59.537 Namespace Write Protected: No 00:14:59.537 Number of LBA Formats: 1 00:14:59.537 Current LBA Format: LBA Format #00 00:14:59.537 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:59.537 00:14:59.537 01:00:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:59.537 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.537 [2024-07-25 01:00:52.676107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.785 Initializing NVMe Controllers 00:15:04.785 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:04.785 Initialization complete. Launching workers. 00:15:04.785 ======================================================== 00:15:04.785 Latency(us) 00:15:04.785 Device Information : IOPS MiB/s Average min max 00:15:04.785 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36042.40 140.79 3551.29 1140.72 9535.76 00:15:04.785 ======================================================== 00:15:04.785 Total : 36042.40 140.79 3551.29 1140.72 9535.76 00:15:04.785 00:15:04.785 [2024-07-25 01:00:57.697939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.785 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:04.785 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.061 [2024-07-25 01:00:57.940036] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.315 Initializing NVMe Controllers 00:15:10.315 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:10.315 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:10.315 Initialization complete. Launching workers. 
00:15:10.315 ======================================================== 00:15:10.315 Latency(us) 00:15:10.315 Device Information : IOPS MiB/s Average min max 00:15:10.315 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16004.60 62.52 8006.15 4964.43 15858.74 00:15:10.315 ======================================================== 00:15:10.315 Total : 16004.60 62.52 8006.15 4964.43 15858.74 00:15:10.315 00:15:10.315 [2024-07-25 01:01:02.976268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.315 01:01:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:10.315 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.315 [2024-07-25 01:01:03.186310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.575 [2024-07-25 01:01:08.274614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.575 Initializing NVMe Controllers 00:15:15.575 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:15.575 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:15.575 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:15.575 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:15.575 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:15.575 Initialization complete. Launching workers. 00:15:15.575 Starting thread on core 2 00:15:15.575 Starting thread on core 3 00:15:15.575 Starting thread on core 1 00:15:15.575 01:01:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:15.575 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.575 [2024-07-25 01:01:08.573721] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.853 [2024-07-25 01:01:11.628477] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.853 Initializing NVMe Controllers 00:15:18.853 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.853 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.853 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:18.853 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:18.853 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:18.853 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:18.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:18.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:18.853 Initialization complete. Launching workers. 
00:15:18.853 Starting thread on core 1 with urgent priority queue 00:15:18.853 Starting thread on core 2 with urgent priority queue 00:15:18.853 Starting thread on core 3 with urgent priority queue 00:15:18.853 Starting thread on core 0 with urgent priority queue 00:15:18.853 SPDK bdev Controller (SPDK1 ) core 0: 5654.33 IO/s 17.69 secs/100000 ios 00:15:18.853 SPDK bdev Controller (SPDK1 ) core 1: 5983.67 IO/s 16.71 secs/100000 ios 00:15:18.853 SPDK bdev Controller (SPDK1 ) core 2: 5713.33 IO/s 17.50 secs/100000 ios 00:15:18.853 SPDK bdev Controller (SPDK1 ) core 3: 4925.33 IO/s 20.30 secs/100000 ios 00:15:18.853 ======================================================== 00:15:18.853 00:15:18.853 01:01:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:18.853 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.853 [2024-07-25 01:01:11.930771] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.853 Initializing NVMe Controllers 00:15:18.853 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.853 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.853 Namespace ID: 1 size: 0GB 00:15:18.853 Initialization complete. 00:15:18.853 INFO: using host memory buffer for IO 00:15:18.853 Hello world! 00:15:18.853 [2024-07-25 01:01:11.964341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.110 01:01:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:19.110 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.110 [2024-07-25 01:01:12.256631] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:20.477 Initializing NVMe Controllers 00:15:20.477 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.477 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.477 Initialization complete. Launching workers. 
00:15:20.477 submit (in ns) avg, min, max = 7201.5, 3475.6, 5996712.2 00:15:20.477 complete (in ns) avg, min, max = 25423.4, 2063.3, 4018126.7 00:15:20.477 00:15:20.477 Submit histogram 00:15:20.477 ================ 00:15:20.477 Range in us Cumulative Count 00:15:20.477 3.461 - 3.484: 0.0074% ( 1) 00:15:20.477 3.508 - 3.532: 0.5912% ( 79) 00:15:20.477 3.532 - 3.556: 1.7220% ( 153) 00:15:20.477 3.556 - 3.579: 5.1881% ( 469) 00:15:20.477 3.579 - 3.603: 11.2482% ( 820) 00:15:20.477 3.603 - 3.627: 20.2572% ( 1219) 00:15:20.477 3.627 - 3.650: 29.0149% ( 1185) 00:15:20.477 3.650 - 3.674: 36.6640% ( 1035) 00:15:20.477 3.674 - 3.698: 43.6923% ( 951) 00:15:20.477 3.698 - 3.721: 51.6813% ( 1081) 00:15:20.477 3.721 - 3.745: 57.5272% ( 791) 00:15:20.477 3.745 - 3.769: 62.5822% ( 684) 00:15:20.477 3.769 - 3.793: 66.1666% ( 485) 00:15:20.477 3.793 - 3.816: 69.3888% ( 436) 00:15:20.477 3.816 - 3.840: 72.8106% ( 463) 00:15:20.477 3.840 - 3.864: 76.6314% ( 517) 00:15:20.477 3.864 - 3.887: 79.9276% ( 446) 00:15:20.477 3.887 - 3.911: 83.1350% ( 434) 00:15:20.477 3.911 - 3.935: 85.8917% ( 373) 00:15:20.477 3.935 - 3.959: 87.9758% ( 282) 00:15:20.477 3.959 - 3.982: 90.0599% ( 282) 00:15:20.477 3.982 - 4.006: 91.5601% ( 203) 00:15:20.477 4.006 - 4.030: 92.6982% ( 154) 00:15:20.477 4.030 - 4.053: 93.7255% ( 139) 00:15:20.477 4.053 - 4.077: 94.4646% ( 100) 00:15:20.477 4.077 - 4.101: 95.0780% ( 83) 00:15:20.477 4.101 - 4.124: 95.5362% ( 62) 00:15:20.477 4.124 - 4.148: 95.9648% ( 58) 00:15:20.477 4.148 - 4.172: 96.2826% ( 43) 00:15:20.477 4.172 - 4.196: 96.4600% ( 24) 00:15:20.477 4.196 - 4.219: 96.5708% ( 15) 00:15:20.477 4.219 - 4.243: 96.6891% ( 16) 00:15:20.477 4.243 - 4.267: 96.7926% ( 14) 00:15:20.477 4.267 - 4.290: 96.9330% ( 19) 00:15:20.477 4.290 - 4.314: 97.0586% ( 17) 00:15:20.477 4.314 - 4.338: 97.1029% ( 6) 00:15:20.477 4.338 - 4.361: 97.2360% ( 18) 00:15:20.477 4.361 - 4.385: 97.3099% ( 10) 00:15:20.477 4.385 - 4.409: 97.3247% ( 2) 00:15:20.478 4.409 - 4.433: 97.3468% ( 3) 00:15:20.478 4.433 - 4.456: 97.3542% ( 1) 00:15:20.478 4.456 - 4.480: 97.3838% ( 4) 00:15:20.478 4.551 - 4.575: 97.4060% ( 3) 00:15:20.478 4.575 - 4.599: 97.4281% ( 3) 00:15:20.478 4.622 - 4.646: 97.4429% ( 2) 00:15:20.478 4.670 - 4.693: 97.4503% ( 1) 00:15:20.478 4.693 - 4.717: 97.4577% ( 1) 00:15:20.478 4.741 - 4.764: 97.4725% ( 2) 00:15:20.478 4.764 - 4.788: 97.4946% ( 3) 00:15:20.478 4.788 - 4.812: 97.5020% ( 1) 00:15:20.478 4.812 - 4.836: 97.5242% ( 3) 00:15:20.478 4.836 - 4.859: 97.5464% ( 3) 00:15:20.478 4.859 - 4.883: 97.5685% ( 3) 00:15:20.478 4.883 - 4.907: 97.5981% ( 4) 00:15:20.478 4.907 - 4.930: 97.6646% ( 9) 00:15:20.478 4.930 - 4.954: 97.7090% ( 6) 00:15:20.478 4.954 - 4.978: 97.7533% ( 6) 00:15:20.478 4.978 - 5.001: 97.7681% ( 2) 00:15:20.478 5.001 - 5.025: 97.7903% ( 3) 00:15:20.478 5.025 - 5.049: 97.7976% ( 1) 00:15:20.478 5.049 - 5.073: 97.8272% ( 4) 00:15:20.478 5.073 - 5.096: 97.8716% ( 6) 00:15:20.478 5.096 - 5.120: 97.9085% ( 5) 00:15:20.478 5.120 - 5.144: 97.9381% ( 4) 00:15:20.478 5.144 - 5.167: 97.9676% ( 4) 00:15:20.478 5.167 - 5.191: 97.9898% ( 3) 00:15:20.478 5.191 - 5.215: 98.0046% ( 2) 00:15:20.478 5.215 - 5.239: 98.0341% ( 4) 00:15:20.478 5.239 - 5.262: 98.0563% ( 3) 00:15:20.478 5.262 - 5.286: 98.0711% ( 2) 00:15:20.478 5.286 - 5.310: 98.0859% ( 2) 00:15:20.478 5.310 - 5.333: 98.1154% ( 4) 00:15:20.478 5.333 - 5.357: 98.1228% ( 1) 00:15:20.478 5.357 - 5.381: 98.1450% ( 3) 00:15:20.478 5.381 - 5.404: 98.1524% ( 1) 00:15:20.478 5.404 - 5.428: 98.1598% ( 1) 00:15:20.478 5.428 - 5.452: 98.1672% ( 1) 
00:15:20.478 5.476 - 5.499: 98.1746% ( 1) 00:15:20.478 5.499 - 5.523: 98.1820% ( 1) 00:15:20.478 5.594 - 5.618: 98.1967% ( 2) 00:15:20.478 5.760 - 5.784: 98.2041% ( 1) 00:15:20.478 5.784 - 5.807: 98.2115% ( 1) 00:15:20.478 5.855 - 5.879: 98.2189% ( 1) 00:15:20.478 5.902 - 5.926: 98.2263% ( 1) 00:15:20.478 5.973 - 5.997: 98.2337% ( 1) 00:15:20.478 5.997 - 6.021: 98.2411% ( 1) 00:15:20.478 6.163 - 6.210: 98.2485% ( 1) 00:15:20.478 6.210 - 6.258: 98.2559% ( 1) 00:15:20.478 6.258 - 6.305: 98.2780% ( 3) 00:15:20.478 6.353 - 6.400: 98.2854% ( 1) 00:15:20.478 6.447 - 6.495: 98.2928% ( 1) 00:15:20.478 6.495 - 6.542: 98.3076% ( 2) 00:15:20.478 6.590 - 6.637: 98.3150% ( 1) 00:15:20.478 6.684 - 6.732: 98.3298% ( 2) 00:15:20.478 6.732 - 6.779: 98.3372% ( 1) 00:15:20.478 6.779 - 6.827: 98.3593% ( 3) 00:15:20.478 6.827 - 6.874: 98.3667% ( 1) 00:15:20.478 6.874 - 6.921: 98.3741% ( 1) 00:15:20.478 7.016 - 7.064: 98.3963% ( 3) 00:15:20.478 7.111 - 7.159: 98.4184% ( 3) 00:15:20.478 7.159 - 7.206: 98.4332% ( 2) 00:15:20.478 7.348 - 7.396: 98.4554% ( 3) 00:15:20.478 7.396 - 7.443: 98.4702% ( 2) 00:15:20.478 7.490 - 7.538: 98.4924% ( 3) 00:15:20.478 7.538 - 7.585: 98.5071% ( 2) 00:15:20.478 7.585 - 7.633: 98.5219% ( 2) 00:15:20.478 7.633 - 7.680: 98.5293% ( 1) 00:15:20.478 7.680 - 7.727: 98.5441% ( 2) 00:15:20.478 7.727 - 7.775: 98.5663% ( 3) 00:15:20.478 7.822 - 7.870: 98.5884% ( 3) 00:15:20.478 7.870 - 7.917: 98.5958% ( 1) 00:15:20.478 7.917 - 7.964: 98.6106% ( 2) 00:15:20.478 7.964 - 8.012: 98.6180% ( 1) 00:15:20.478 8.107 - 8.154: 98.6254% ( 1) 00:15:20.478 8.201 - 8.249: 98.6402% ( 2) 00:15:20.478 8.249 - 8.296: 98.6623% ( 3) 00:15:20.478 8.344 - 8.391: 98.6697% ( 1) 00:15:20.478 8.391 - 8.439: 98.6771% ( 1) 00:15:20.478 8.486 - 8.533: 98.6919% ( 2) 00:15:20.478 8.581 - 8.628: 98.6993% ( 1) 00:15:20.478 8.676 - 8.723: 98.7067% ( 1) 00:15:20.478 8.770 - 8.818: 98.7141% ( 1) 00:15:20.478 8.865 - 8.913: 98.7215% ( 1) 00:15:20.478 8.913 - 8.960: 98.7362% ( 2) 00:15:20.478 8.960 - 9.007: 98.7436% ( 1) 00:15:20.478 9.007 - 9.055: 98.7510% ( 1) 00:15:20.478 9.102 - 9.150: 98.7584% ( 1) 00:15:20.478 9.339 - 9.387: 98.7658% ( 1) 00:15:20.478 9.576 - 9.624: 98.7732% ( 1) 00:15:20.478 9.624 - 9.671: 98.7806% ( 1) 00:15:20.478 9.719 - 9.766: 98.7880% ( 1) 00:15:20.478 9.861 - 9.908: 98.7954% ( 1) 00:15:20.478 10.050 - 10.098: 98.8101% ( 2) 00:15:20.478 10.193 - 10.240: 98.8175% ( 1) 00:15:20.478 10.240 - 10.287: 98.8249% ( 1) 00:15:20.478 10.524 - 10.572: 98.8323% ( 1) 00:15:20.478 10.572 - 10.619: 98.8397% ( 1) 00:15:20.478 11.046 - 11.093: 98.8545% ( 2) 00:15:20.478 11.093 - 11.141: 98.8619% ( 1) 00:15:20.478 11.236 - 11.283: 98.8693% ( 1) 00:15:20.478 11.283 - 11.330: 98.8767% ( 1) 00:15:20.478 11.378 - 11.425: 98.8840% ( 1) 00:15:20.478 11.662 - 11.710: 98.8914% ( 1) 00:15:20.478 11.804 - 11.852: 98.8988% ( 1) 00:15:20.478 11.947 - 11.994: 98.9136% ( 2) 00:15:20.478 12.041 - 12.089: 98.9210% ( 1) 00:15:20.478 12.231 - 12.326: 98.9284% ( 1) 00:15:20.478 12.326 - 12.421: 98.9358% ( 1) 00:15:20.478 12.516 - 12.610: 98.9506% ( 2) 00:15:20.478 12.800 - 12.895: 98.9653% ( 2) 00:15:20.478 12.895 - 12.990: 98.9727% ( 1) 00:15:20.478 12.990 - 13.084: 98.9801% ( 1) 00:15:20.478 13.084 - 13.179: 99.0023% ( 3) 00:15:20.478 13.274 - 13.369: 99.0097% ( 1) 00:15:20.478 13.369 - 13.464: 99.0245% ( 2) 00:15:20.478 13.843 - 13.938: 99.0319% ( 1) 00:15:20.478 13.938 - 14.033: 99.0392% ( 1) 00:15:20.478 14.033 - 14.127: 99.0540% ( 2) 00:15:20.478 14.127 - 14.222: 99.0614% ( 1) 00:15:20.478 14.222 - 14.317: 99.0762% ( 2) 00:15:20.478 
14.317 - 14.412: 99.0836% ( 1) 00:15:20.478 14.412 - 14.507: 99.0910% ( 1) 00:15:20.478 14.507 - 14.601: 99.1058% ( 2) 00:15:20.478 14.601 - 14.696: 99.1205% ( 2) 00:15:20.478 16.782 - 16.877: 99.1279% ( 1) 00:15:20.478 17.256 - 17.351: 99.1353% ( 1) 00:15:20.478 17.351 - 17.446: 99.1649% ( 4) 00:15:20.478 17.446 - 17.541: 99.1944% ( 4) 00:15:20.478 17.541 - 17.636: 99.2314% ( 5) 00:15:20.478 17.636 - 17.730: 99.2757% ( 6) 00:15:20.478 17.730 - 17.825: 99.3053% ( 4) 00:15:20.478 17.825 - 17.920: 99.3423% ( 5) 00:15:20.478 17.920 - 18.015: 99.3940% ( 7) 00:15:20.478 18.015 - 18.110: 99.4162% ( 3) 00:15:20.478 18.110 - 18.204: 99.4679% ( 7) 00:15:20.478 18.204 - 18.299: 99.4901% ( 3) 00:15:20.478 18.299 - 18.394: 99.5492% ( 8) 00:15:20.478 18.394 - 18.489: 99.5714% ( 3) 00:15:20.478 18.489 - 18.584: 99.6305% ( 8) 00:15:20.478 18.584 - 18.679: 99.6600% ( 4) 00:15:20.478 18.679 - 18.773: 99.6822% ( 3) 00:15:20.478 18.773 - 18.868: 99.7044% ( 3) 00:15:20.478 18.868 - 18.963: 99.7266% ( 3) 00:15:20.478 18.963 - 19.058: 99.7339% ( 1) 00:15:20.478 19.153 - 19.247: 99.7709% ( 5) 00:15:20.478 19.247 - 19.342: 99.7857% ( 2) 00:15:20.478 19.437 - 19.532: 99.8005% ( 2) 00:15:20.478 19.532 - 19.627: 99.8226% ( 3) 00:15:20.478 19.721 - 19.816: 99.8300% ( 1) 00:15:20.478 20.006 - 20.101: 99.8448% ( 2) 00:15:20.478 20.101 - 20.196: 99.8522% ( 1) 00:15:20.478 21.239 - 21.333: 99.8596% ( 1) 00:15:20.478 21.333 - 21.428: 99.8670% ( 1) 00:15:20.478 21.428 - 21.523: 99.8744% ( 1) 00:15:20.478 22.376 - 22.471: 99.8818% ( 1) 00:15:20.478 22.850 - 22.945: 99.8891% ( 1) 00:15:20.478 25.031 - 25.221: 99.8965% ( 1) 00:15:20.478 29.393 - 29.582: 99.9039% ( 1) 00:15:20.478 30.151 - 30.341: 99.9113% ( 1) 00:15:20.478 30.530 - 30.720: 99.9187% ( 1) 00:15:20.478 2184.533 - 2196.670: 99.9261% ( 1) 00:15:20.478 3980.705 - 4004.978: 99.9483% ( 3) 00:15:20.478 4004.978 - 4029.250: 99.9926% ( 6) 00:15:20.478 5995.330 - 6019.603: 100.0000% ( 1) 00:15:20.478 00:15:20.478 Complete histogram 00:15:20.478 ================== 00:15:20.478 Range in us Cumulative Count 00:15:20.478 2.062 - 2.074: 11.5291% ( 1560) 00:15:20.478 2.074 - 2.086: 36.1097% ( 3326) 00:15:20.478 2.086 - 2.098: 38.9624% ( 386) 00:15:20.478 2.098 - 2.110: 50.9349% ( 1620) 00:15:20.478 2.110 - 2.121: 60.5720% ( 1304) 00:15:20.478 2.121 - 2.133: 62.2201% ( 223) 00:15:20.478 2.133 - 2.145: 72.2193% ( 1353) 00:15:20.478 2.145 - 2.157: 78.2499% ( 816) 00:15:20.478 2.157 - 2.169: 79.5654% ( 178) 00:15:20.478 2.169 - 2.181: 84.9531% ( 729) 00:15:20.478 2.181 - 2.193: 87.6358% ( 363) 00:15:20.478 2.193 - 2.204: 88.3896% ( 102) 00:15:20.478 2.204 - 2.216: 90.1412% ( 237) 00:15:20.478 2.216 - 2.228: 91.5675% ( 193) 00:15:20.478 2.228 - 2.240: 93.2451% ( 227) 00:15:20.478 2.240 - 2.252: 94.4646% ( 165) 00:15:20.479 2.252 - 2.264: 94.8710% ( 55) 00:15:20.479 2.264 - 2.276: 95.0188% ( 20) 00:15:20.479 2.276 - 2.287: 95.2184% ( 27) 00:15:20.479 2.287 - 2.299: 95.5436% ( 44) 00:15:20.479 2.299 - 2.311: 95.9796% ( 59) 00:15:20.479 2.311 - 2.323: 96.1718% ( 26) 00:15:20.479 2.323 - 2.335: 96.2161% ( 6) 00:15:20.479 2.335 - 2.347: 96.2457% ( 4) 00:15:20.479 2.347 - 2.359: 96.3639% ( 16) 00:15:20.479 2.359 - 2.370: 96.7039% ( 46) 00:15:20.479 2.370 - 2.382: 96.9921% ( 39) 00:15:20.479 2.382 - 2.394: 97.3321% ( 46) 00:15:20.479 2.394 - 2.406: 97.6129% ( 38) 00:15:20.479 2.406 - 2.418: 97.7459% ( 18) 00:15:20.479 2.418 - 2.430: 97.9676% ( 30) 00:15:20.479 2.430 - 2.441: 98.0563% ( 12) 00:15:20.479 2.441 - 2.453: 98.1672% ( 15) 00:15:20.479 2.453 - 2.465: 98.2928% ( 17) 00:15:20.479 
2.465 - 2.477: 98.3889% ( 13) 00:15:20.479 2.477 - 2.489: 98.4554% ( 9) 00:15:20.479 2.489 - 2.501: 98.5219% ( 9) 00:15:20.479 2.501 - 2.513: 98.5515% ( 4) 00:15:20.479 2.513 - 2.524: 98.6032% ( 7) 00:15:20.479 2.524 - 2.536: 98.6180% ( 2) 00:15:20.479 2.572 - 2.584: 98.6328% ( 2) 00:15:20.479 2.643 - 2.655: 98.6402% ( 1) 00:15:20.479 2.655 - 2.667: 98.6476% ( 1) 00:15:20.479 2.679 - 2.690: 98.6549% ( 1) 00:15:20.479 2.761 - 2.773: 98.6623% ( 1) 00:15:20.479 2.797 - 2.809: 98.6697% ( 1) 00:15:20.479 3.176 - 3.200: 98.6771% ( 1) 00:15:20.479 3.247 - 3.271: 98.6993% ( 3) [2024-07-25 01:01:13.277939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:20.479 3.342 - 3.366: 98.7067% ( 1) 00:15:20.479 3.366 - 3.390: 98.7215% ( 2) 00:15:20.479 3.390 - 3.413: 98.7362% ( 2) 00:15:20.479 3.413 - 3.437: 98.7584% ( 3) 00:15:20.479 3.461 - 3.484: 98.7658% ( 1) 00:15:20.479 3.484 - 3.508: 98.7732% ( 1) 00:15:20.479 3.627 - 3.650: 98.7880% ( 2) 00:15:20.479 3.721 - 3.745: 98.7954% ( 1) 00:15:20.479 3.840 - 3.864: 98.8027% ( 1) 00:15:20.479 3.935 - 3.959: 98.8101% ( 1) 00:15:20.479 4.788 - 4.812: 98.8175% ( 1) 00:15:20.479 5.167 - 5.191: 98.8249% ( 1) 00:15:20.479 5.262 - 5.286: 98.8323% ( 1) 00:15:20.479 5.286 - 5.310: 98.8471% ( 2) 00:15:20.479 5.333 - 5.357: 98.8545% ( 1) 00:15:20.479 5.452 - 5.476: 98.8619% ( 1) 00:15:20.479 5.594 - 5.618: 98.8693% ( 1) 00:15:20.479 5.618 - 5.641: 98.8840% ( 2) 00:15:20.479 5.760 - 5.784: 98.8914% ( 1) 00:15:20.479 6.021 - 6.044: 98.8988% ( 1) 00:15:20.479 6.400 - 6.447: 98.9062% ( 1) 00:15:20.479 6.447 - 6.495: 98.9136% ( 1) 00:15:20.479 7.111 - 7.159: 98.9210% ( 1) 00:15:20.479 7.490 - 7.538: 98.9284% ( 1) 00:15:20.479 7.727 - 7.775: 98.9358% ( 1) 00:15:20.479 10.904 - 10.951: 98.9432% ( 1) 00:15:20.479 15.170 - 15.265: 98.9506% ( 1) 00:15:20.479 15.739 - 15.834: 98.9653% ( 2) 00:15:20.479 15.834 - 15.929: 98.9801% ( 2) 00:15:20.479 15.929 - 16.024: 98.9949% ( 2) 00:15:20.479 16.024 - 16.119: 99.0245% ( 4) 00:15:20.479 16.119 - 16.213: 99.0762% ( 7) 00:15:20.479 16.213 - 16.308: 99.0984% ( 3) 00:15:20.479 16.308 - 16.403: 99.1131% ( 2) 00:15:20.479 16.403 - 16.498: 99.1649% ( 7) 00:15:20.479 16.498 - 16.593: 99.2018% ( 5) 00:15:20.479 16.593 - 16.687: 99.2092% ( 1) 00:15:20.479 16.687 - 16.782: 99.2240% ( 2) 00:15:20.479 16.782 - 16.877: 99.2536% ( 4) 00:15:20.479 16.877 - 16.972: 99.2757% ( 3) 00:15:20.479 16.972 - 17.067: 99.2831% ( 1) 00:15:20.479 17.067 - 17.161: 99.2905% ( 1) 00:15:20.479 17.256 - 17.351: 99.3053% ( 2) 00:15:20.479 17.351 - 17.446: 99.3275% ( 3) 00:15:20.479 17.446 - 17.541: 99.3423% ( 2) 00:15:20.479 17.541 - 17.636: 99.3496% ( 1) 00:15:20.479 17.636 - 17.730: 99.3718% ( 3) 00:15:20.479 17.730 - 17.825: 99.3792% ( 1) 00:15:20.479 17.825 - 17.920: 99.3940% ( 2) 00:15:20.479 18.204 - 18.299: 99.4088% ( 2) 00:15:20.479 2014.625 - 2026.761: 99.4162% ( 1) 00:15:20.479 2026.761 - 2038.898: 99.4309% ( 2) 00:15:20.479 3980.705 - 4004.978: 99.8005% ( 50) 00:15:20.479 4004.978 - 4029.250: 100.0000% ( 27) 00:15:20.479 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user --
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:20.479 [ 00:15:20.479 { 00:15:20.479 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:20.479 "subtype": "Discovery", 00:15:20.479 "listen_addresses": [], 00:15:20.479 "allow_any_host": true, 00:15:20.479 "hosts": [] 00:15:20.479 }, 00:15:20.479 { 00:15:20.479 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:20.479 "subtype": "NVMe", 00:15:20.479 "listen_addresses": [ 00:15:20.479 { 00:15:20.479 "trtype": "VFIOUSER", 00:15:20.479 "adrfam": "IPv4", 00:15:20.479 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:20.479 "trsvcid": "0" 00:15:20.479 } 00:15:20.479 ], 00:15:20.479 "allow_any_host": true, 00:15:20.479 "hosts": [], 00:15:20.479 "serial_number": "SPDK1", 00:15:20.479 "model_number": "SPDK bdev Controller", 00:15:20.479 "max_namespaces": 32, 00:15:20.479 "min_cntlid": 1, 00:15:20.479 "max_cntlid": 65519, 00:15:20.479 "namespaces": [ 00:15:20.479 { 00:15:20.479 "nsid": 1, 00:15:20.479 "bdev_name": "Malloc1", 00:15:20.479 "name": "Malloc1", 00:15:20.479 "nguid": "21B51829DE8F45A5AE0BC239D30B7B74", 00:15:20.479 "uuid": "21b51829-de8f-45a5-ae0b-c239d30b7b74" 00:15:20.479 } 00:15:20.479 ] 00:15:20.479 }, 00:15:20.479 { 00:15:20.479 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:20.479 "subtype": "NVMe", 00:15:20.479 "listen_addresses": [ 00:15:20.479 { 00:15:20.479 "trtype": "VFIOUSER", 00:15:20.479 "adrfam": "IPv4", 00:15:20.479 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:20.479 "trsvcid": "0" 00:15:20.479 } 00:15:20.479 ], 00:15:20.479 "allow_any_host": true, 00:15:20.479 "hosts": [], 00:15:20.479 "serial_number": "SPDK2", 00:15:20.479 "model_number": "SPDK bdev Controller", 00:15:20.479 "max_namespaces": 32, 00:15:20.479 "min_cntlid": 1, 00:15:20.479 "max_cntlid": 65519, 00:15:20.479 "namespaces": [ 00:15:20.479 { 00:15:20.479 "nsid": 1, 00:15:20.479 "bdev_name": "Malloc2", 00:15:20.479 "name": "Malloc2", 00:15:20.479 "nguid": "71E66C9C856F478FAC08D1BCEA6147A4", 00:15:20.479 "uuid": "71e66c9c-856f-478f-ac08-d1bcea6147a4" 00:15:20.479 } 00:15:20.479 ] 00:15:20.479 } 00:15:20.479 ] 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3733786 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:20.479 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:20.736 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.736 [2024-07-25 01:01:13.782742] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:20.993 Malloc3 00:15:20.993 01:01:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:21.250 [2024-07-25 01:01:14.147460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.250 01:01:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.250 Asynchronous Event Request test 00:15:21.250 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.250 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.250 Registering asynchronous event callbacks... 00:15:21.250 Starting namespace attribute notice tests for all controllers... 00:15:21.250 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.250 aer_cb - Changed Namespace 00:15:21.250 Cleaning up... 00:15:21.250 [ 00:15:21.250 { 00:15:21.250 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.250 "subtype": "Discovery", 00:15:21.250 "listen_addresses": [], 00:15:21.250 "allow_any_host": true, 00:15:21.250 "hosts": [] 00:15:21.250 }, 00:15:21.250 { 00:15:21.250 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.250 "subtype": "NVMe", 00:15:21.250 "listen_addresses": [ 00:15:21.250 { 00:15:21.250 "trtype": "VFIOUSER", 00:15:21.250 "adrfam": "IPv4", 00:15:21.250 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.250 "trsvcid": "0" 00:15:21.250 } 00:15:21.250 ], 00:15:21.250 "allow_any_host": true, 00:15:21.250 "hosts": [], 00:15:21.250 "serial_number": "SPDK1", 00:15:21.250 "model_number": "SPDK bdev Controller", 00:15:21.250 "max_namespaces": 32, 00:15:21.250 "min_cntlid": 1, 00:15:21.250 "max_cntlid": 65519, 00:15:21.250 "namespaces": [ 00:15:21.250 { 00:15:21.250 "nsid": 1, 00:15:21.250 "bdev_name": "Malloc1", 00:15:21.250 "name": "Malloc1", 00:15:21.250 "nguid": "21B51829DE8F45A5AE0BC239D30B7B74", 00:15:21.250 "uuid": "21b51829-de8f-45a5-ae0b-c239d30b7b74" 00:15:21.250 }, 00:15:21.250 { 00:15:21.250 "nsid": 2, 00:15:21.250 "bdev_name": "Malloc3", 00:15:21.250 "name": "Malloc3", 00:15:21.250 "nguid": "9DAA03E1EF4741E8B7E7A7B919FBD197", 00:15:21.250 "uuid": "9daa03e1-ef47-41e8-b7e7-a7b919fbd197" 00:15:21.250 } 00:15:21.250 ] 00:15:21.250 }, 00:15:21.250 { 00:15:21.250 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.250 "subtype": "NVMe", 00:15:21.250 "listen_addresses": [ 00:15:21.250 { 00:15:21.250 "trtype": "VFIOUSER", 00:15:21.250 "adrfam": "IPv4", 00:15:21.250 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.250 "trsvcid": "0" 00:15:21.250 } 00:15:21.250 ], 00:15:21.250 "allow_any_host": true, 00:15:21.250 "hosts": [], 00:15:21.250 "serial_number": "SPDK2", 00:15:21.250 "model_number": "SPDK bdev Controller", 00:15:21.250 
"max_namespaces": 32, 00:15:21.250 "min_cntlid": 1, 00:15:21.250 "max_cntlid": 65519, 00:15:21.250 "namespaces": [ 00:15:21.250 { 00:15:21.250 "nsid": 1, 00:15:21.250 "bdev_name": "Malloc2", 00:15:21.250 "name": "Malloc2", 00:15:21.250 "nguid": "71E66C9C856F478FAC08D1BCEA6147A4", 00:15:21.250 "uuid": "71e66c9c-856f-478f-ac08-d1bcea6147a4" 00:15:21.250 } 00:15:21.250 ] 00:15:21.250 } 00:15:21.250 ] 00:15:21.250 01:01:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3733786 00:15:21.250 01:01:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.250 01:01:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:21.250 01:01:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:21.250 01:01:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.508 [2024-07-25 01:01:14.411837] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:15:21.508 [2024-07-25 01:01:14.411877] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733922 ] 00:15:21.508 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.508 [2024-07-25 01:01:14.446160] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:21.508 [2024-07-25 01:01:14.455310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.508 [2024-07-25 01:01:14.455341] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc456d65000 00:15:21.508 [2024-07-25 01:01:14.456305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.508 [2024-07-25 01:01:14.457309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.508 [2024-07-25 01:01:14.458335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.508 [2024-07-25 01:01:14.459342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.508 [2024-07-25 01:01:14.460349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.508 [2024-07-25 01:01:14.461355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.508 [2024-07-25 01:01:14.462359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.508 [2024-07-25 01:01:14.463368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.508 [2024-07-25 01:01:14.464381] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.508 [2024-07-25 01:01:14.464403] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc455b17000 00:15:21.508 [2024-07-25 01:01:14.465518] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.508 [2024-07-25 01:01:14.480172] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:21.508 [2024-07-25 01:01:14.480204] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:21.508 [2024-07-25 01:01:14.485337] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:21.508 [2024-07-25 01:01:14.485392] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.508 [2024-07-25 01:01:14.485477] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:21.508 [2024-07-25 01:01:14.485501] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:21.508 [2024-07-25 01:01:14.485511] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:21.508 [2024-07-25 01:01:14.486342] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:21.508 [2024-07-25 01:01:14.486366] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:21.508 [2024-07-25 01:01:14.486380] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:21.508 [2024-07-25 01:01:14.487345] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:21.508 [2024-07-25 01:01:14.487366] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:21.508 [2024-07-25 01:01:14.487379] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.508 [2024-07-25 01:01:14.488352] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:21.508 [2024-07-25 01:01:14.488372] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.509 [2024-07-25 01:01:14.489355] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:21.509 [2024-07-25 01:01:14.489375] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:21.509 [2024-07-25 01:01:14.489384] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:21.509 [2024-07-25 01:01:14.489395] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.509 [2024-07-25 01:01:14.489504] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:21.509 [2024-07-25 01:01:14.489512] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.509 [2024-07-25 01:01:14.489520] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:21.509 [2024-07-25 01:01:14.490367] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:21.509 [2024-07-25 01:01:14.491370] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:21.509 [2024-07-25 01:01:14.492379] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:21.509 [2024-07-25 01:01:14.493378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.509 [2024-07-25 01:01:14.493458] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.509 [2024-07-25 01:01:14.494396] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:21.509 [2024-07-25 01:01:14.494415] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.509 [2024-07-25 01:01:14.494424] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.494448] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:21.509 [2024-07-25 01:01:14.494461] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.494483] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.509 [2024-07-25 01:01:14.494493] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.509 [2024-07-25 01:01:14.494510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.503260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.503286] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:21.509 [2024-07-25 01:01:14.503296] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:21.509 [2024-07-25 01:01:14.503307] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:21.509 [2024-07-25 01:01:14.503316] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.509 [2024-07-25 01:01:14.503324] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:21.509 [2024-07-25 01:01:14.503331] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:21.509 [2024-07-25 01:01:14.503339] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.503351] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.503366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.511256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.511280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.509 [2024-07-25 01:01:14.511293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.509 [2024-07-25 01:01:14.511305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.509 [2024-07-25 01:01:14.511317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.509 [2024-07-25 01:01:14.511325] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.511341] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.511355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.519269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.519287] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:21.509 [2024-07-25 01:01:14.519296] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.519307] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.519320] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.519335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.524276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.524349] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.524366] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.524378] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.509 [2024-07-25 01:01:14.524391] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.509 [2024-07-25 01:01:14.524401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.535269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.535300] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:21.509 [2024-07-25 01:01:14.535319] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.535333] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.535346] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.509 [2024-07-25 01:01:14.535354] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.509 [2024-07-25 01:01:14.535363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.543252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.543280] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.543306] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.543319] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.509 [2024-07-25 01:01:14.543327] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.509 [2024-07-25 01:01:14.543337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.551253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.551273] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.551288] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.551301] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.551311] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.551319] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.551327] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.509 [2024-07-25 01:01:14.551334] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:21.509 [2024-07-25 01:01:14.551342] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:21.509 [2024-07-25 01:01:14.551372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.559269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.559295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.567268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.567293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:21.509 [2024-07-25 01:01:14.575268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:21.509 [2024-07-25 01:01:14.575293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.510 [2024-07-25 01:01:14.583255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:21.510 [2024-07-25 01:01:14.583281] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:21.510 [2024-07-25 01:01:14.583306] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:21.510 [2024-07-25 01:01:14.583313] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:21.510 [2024-07-25 01:01:14.583319] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:21.510 [2024-07-25 01:01:14.583329] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:21.510 [2024-07-25 01:01:14.583342] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:21.510 [2024-07-25 01:01:14.583350] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:21.510 [2024-07-25 01:01:14.583359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:21.510 [2024-07-25 01:01:14.583371] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:21.510 [2024-07-25 01:01:14.583379] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.510 [2024-07-25 01:01:14.583388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.510 [2024-07-25 01:01:14.583399] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:21.510 [2024-07-25 01:01:14.583408] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:21.510 [2024-07-25 01:01:14.583417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:21.510 [2024-07-25 01:01:14.591255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:21.510 [2024-07-25 01:01:14.591283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:21.510 [2024-07-25 01:01:14.591298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:21.510 [2024-07-25 01:01:14.591312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:21.510 ===================================================== 00:15:21.510 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.510 ===================================================== 00:15:21.510 Controller Capabilities/Features 00:15:21.510 ================================ 00:15:21.510 Vendor ID: 4e58 00:15:21.510 Subsystem Vendor ID: 4e58 00:15:21.510 Serial Number: SPDK2 00:15:21.510 Model Number: SPDK bdev Controller 00:15:21.510 Firmware Version: 24.05.1 00:15:21.510 Recommended Arb Burst: 6 00:15:21.510 IEEE OUI Identifier: 8d 6b 50 00:15:21.510 Multi-path I/O 00:15:21.510 May have multiple subsystem ports: Yes 00:15:21.510 May have multiple controllers: Yes 00:15:21.510 Associated with SR-IOV VF: No 00:15:21.510 Max Data Transfer Size: 131072 00:15:21.510 Max Number of Namespaces: 32 00:15:21.510 Max Number of I/O Queues: 127 00:15:21.510 NVMe Specification Version (VS): 1.3 00:15:21.510 NVMe Specification Version (Identify): 1.3 00:15:21.510 Maximum Queue Entries: 256 00:15:21.510 Contiguous Queues Required: Yes 00:15:21.510 Arbitration Mechanisms Supported 00:15:21.510 Weighted Round Robin: Not Supported 00:15:21.510 Vendor Specific: Not Supported 00:15:21.510 Reset Timeout: 15000 ms 00:15:21.510 Doorbell Stride: 4 bytes 
00:15:21.510 NVM Subsystem Reset: Not Supported 00:15:21.510 Command Sets Supported 00:15:21.510 NVM Command Set: Supported 00:15:21.510 Boot Partition: Not Supported 00:15:21.510 Memory Page Size Minimum: 4096 bytes 00:15:21.510 Memory Page Size Maximum: 4096 bytes 00:15:21.510 Persistent Memory Region: Not Supported 00:15:21.510 Optional Asynchronous Events Supported 00:15:21.510 Namespace Attribute Notices: Supported 00:15:21.510 Firmware Activation Notices: Not Supported 00:15:21.510 ANA Change Notices: Not Supported 00:15:21.510 PLE Aggregate Log Change Notices: Not Supported 00:15:21.510 LBA Status Info Alert Notices: Not Supported 00:15:21.510 EGE Aggregate Log Change Notices: Not Supported 00:15:21.510 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.510 Zone Descriptor Change Notices: Not Supported 00:15:21.510 Discovery Log Change Notices: Not Supported 00:15:21.510 Controller Attributes 00:15:21.510 128-bit Host Identifier: Supported 00:15:21.510 Non-Operational Permissive Mode: Not Supported 00:15:21.510 NVM Sets: Not Supported 00:15:21.510 Read Recovery Levels: Not Supported 00:15:21.510 Endurance Groups: Not Supported 00:15:21.510 Predictable Latency Mode: Not Supported 00:15:21.510 Traffic Based Keep ALive: Not Supported 00:15:21.510 Namespace Granularity: Not Supported 00:15:21.510 SQ Associations: Not Supported 00:15:21.510 UUID List: Not Supported 00:15:21.510 Multi-Domain Subsystem: Not Supported 00:15:21.510 Fixed Capacity Management: Not Supported 00:15:21.510 Variable Capacity Management: Not Supported 00:15:21.510 Delete Endurance Group: Not Supported 00:15:21.510 Delete NVM Set: Not Supported 00:15:21.510 Extended LBA Formats Supported: Not Supported 00:15:21.510 Flexible Data Placement Supported: Not Supported 00:15:21.510 00:15:21.510 Controller Memory Buffer Support 00:15:21.510 ================================ 00:15:21.510 Supported: No 00:15:21.510 00:15:21.510 Persistent Memory Region Support 00:15:21.510 ================================ 00:15:21.510 Supported: No 00:15:21.510 00:15:21.510 Admin Command Set Attributes 00:15:21.510 ============================ 00:15:21.510 Security Send/Receive: Not Supported 00:15:21.510 Format NVM: Not Supported 00:15:21.510 Firmware Activate/Download: Not Supported 00:15:21.510 Namespace Management: Not Supported 00:15:21.510 Device Self-Test: Not Supported 00:15:21.510 Directives: Not Supported 00:15:21.510 NVMe-MI: Not Supported 00:15:21.510 Virtualization Management: Not Supported 00:15:21.510 Doorbell Buffer Config: Not Supported 00:15:21.510 Get LBA Status Capability: Not Supported 00:15:21.510 Command & Feature Lockdown Capability: Not Supported 00:15:21.510 Abort Command Limit: 4 00:15:21.510 Async Event Request Limit: 4 00:15:21.510 Number of Firmware Slots: N/A 00:15:21.510 Firmware Slot 1 Read-Only: N/A 00:15:21.510 Firmware Activation Without Reset: N/A 00:15:21.510 Multiple Update Detection Support: N/A 00:15:21.510 Firmware Update Granularity: No Information Provided 00:15:21.510 Per-Namespace SMART Log: No 00:15:21.510 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.510 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:21.510 Command Effects Log Page: Supported 00:15:21.510 Get Log Page Extended Data: Supported 00:15:21.510 Telemetry Log Pages: Not Supported 00:15:21.510 Persistent Event Log Pages: Not Supported 00:15:21.510 Supported Log Pages Log Page: May Support 00:15:21.510 Commands Supported & Effects Log Page: Not Supported 00:15:21.510 Feature Identifiers & Effects Log Page:May 
Support 00:15:21.510 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.510 Data Area 4 for Telemetry Log: Not Supported 00:15:21.510 Error Log Page Entries Supported: 128 00:15:21.510 Keep Alive: Supported 00:15:21.510 Keep Alive Granularity: 10000 ms 00:15:21.510 00:15:21.510 NVM Command Set Attributes 00:15:21.510 ========================== 00:15:21.510 Submission Queue Entry Size 00:15:21.510 Max: 64 00:15:21.510 Min: 64 00:15:21.510 Completion Queue Entry Size 00:15:21.510 Max: 16 00:15:21.510 Min: 16 00:15:21.510 Number of Namespaces: 32 00:15:21.510 Compare Command: Supported 00:15:21.510 Write Uncorrectable Command: Not Supported 00:15:21.510 Dataset Management Command: Supported 00:15:21.510 Write Zeroes Command: Supported 00:15:21.510 Set Features Save Field: Not Supported 00:15:21.510 Reservations: Not Supported 00:15:21.510 Timestamp: Not Supported 00:15:21.510 Copy: Supported 00:15:21.510 Volatile Write Cache: Present 00:15:21.510 Atomic Write Unit (Normal): 1 00:15:21.510 Atomic Write Unit (PFail): 1 00:15:21.510 Atomic Compare & Write Unit: 1 00:15:21.510 Fused Compare & Write: Supported 00:15:21.510 Scatter-Gather List 00:15:21.510 SGL Command Set: Supported (Dword aligned) 00:15:21.510 SGL Keyed: Not Supported 00:15:21.510 SGL Bit Bucket Descriptor: Not Supported 00:15:21.510 SGL Metadata Pointer: Not Supported 00:15:21.510 Oversized SGL: Not Supported 00:15:21.510 SGL Metadata Address: Not Supported 00:15:21.510 SGL Offset: Not Supported 00:15:21.510 Transport SGL Data Block: Not Supported 00:15:21.510 Replay Protected Memory Block: Not Supported 00:15:21.510 00:15:21.510 Firmware Slot Information 00:15:21.510 ========================= 00:15:21.510 Active slot: 1 00:15:21.510 Slot 1 Firmware Revision: 24.05.1 00:15:21.510 00:15:21.510 00:15:21.510 Commands Supported and Effects 00:15:21.510 ============================== 00:15:21.510 Admin Commands 00:15:21.510 -------------- 00:15:21.510 Get Log Page (02h): Supported 00:15:21.511 Identify (06h): Supported 00:15:21.511 Abort (08h): Supported 00:15:21.511 Set Features (09h): Supported 00:15:21.511 Get Features (0Ah): Supported 00:15:21.511 Asynchronous Event Request (0Ch): Supported 00:15:21.511 Keep Alive (18h): Supported 00:15:21.511 I/O Commands 00:15:21.511 ------------ 00:15:21.511 Flush (00h): Supported LBA-Change 00:15:21.511 Write (01h): Supported LBA-Change 00:15:21.511 Read (02h): Supported 00:15:21.511 Compare (05h): Supported 00:15:21.511 Write Zeroes (08h): Supported LBA-Change 00:15:21.511 Dataset Management (09h): Supported LBA-Change 00:15:21.511 Copy (19h): Supported LBA-Change 00:15:21.511 Unknown (79h): Supported LBA-Change 00:15:21.511 Unknown (7Ah): Supported 00:15:21.511 00:15:21.511 Error Log 00:15:21.511 ========= 00:15:21.511 00:15:21.511 Arbitration 00:15:21.511 =========== 00:15:21.511 Arbitration Burst: 1 00:15:21.511 00:15:21.511 Power Management 00:15:21.511 ================ 00:15:21.511 Number of Power States: 1 00:15:21.511 Current Power State: Power State #0 00:15:21.511 Power State #0: 00:15:21.511 Max Power: 0.00 W 00:15:21.511 Non-Operational State: Operational 00:15:21.511 Entry Latency: Not Reported 00:15:21.511 Exit Latency: Not Reported 00:15:21.511 Relative Read Throughput: 0 00:15:21.511 Relative Read Latency: 0 00:15:21.511 Relative Write Throughput: 0 00:15:21.511 Relative Write Latency: 0 00:15:21.511 Idle Power: Not Reported 00:15:21.511 Active Power: Not Reported 00:15:21.511 Non-Operational Permissive Mode: Not Supported 00:15:21.511 00:15:21.511 Health Information 
00:15:21.511 ================== 00:15:21.511 Critical Warnings: 00:15:21.511 Available Spare Space: OK 00:15:21.511 Temperature: OK 00:15:21.511 Device Reliability: OK 00:15:21.511 Read Only: No 00:15:21.511 Volatile Memory Backup: OK 00:15:21.511 Current Temperature: 0 Kelvin (-273 Celsius) [2024-07-25 01:01:14.591439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:21.511 [2024-07-25 01:01:14.599254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:21.511 [2024-07-25 01:01:14.599301] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:21.511 [2024-07-25 01:01:14.599325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.511 [2024-07-25 01:01:14.599336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.511 [2024-07-25 01:01:14.599346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.511 [2024-07-25 01:01:14.599355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.511 [2024-07-25 01:01:14.599420] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:21.511 [2024-07-25 01:01:14.599440] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:21.511 [2024-07-25 01:01:14.600420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.511 [2024-07-25 01:01:14.600489] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:21.511 [2024-07-25 01:01:14.600504] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:21.511 [2024-07-25 01:01:14.601432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:21.511 [2024-07-25 01:01:14.601456] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:21.511 [2024-07-25 01:01:14.601506] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:21.511 [2024-07-25 01:01:14.602694] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.511 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:21.511 Available Spare: 0% 00:15:21.511 Available Spare Threshold: 0% 00:15:21.511 Life Percentage Used: 0% 00:15:21.511 Data Units Read: 0 00:15:21.511 Data Units Written: 0 00:15:21.511 Host Read Commands: 0 00:15:21.511 Host Write Commands: 0 00:15:21.511 Controller Busy Time: 0 minutes 00:15:21.511 Power Cycles: 0 00:15:21.511 Power On Hours: 0 hours 00:15:21.511 Unsafe Shutdowns: 0 00:15:21.511 Unrecoverable Media Errors: 0 00:15:21.511 Lifetime Error Log Entries: 0 00:15:21.511 Warning Temperature Time: 0
minutes 00:15:21.511 Critical Temperature Time: 0 minutes 00:15:21.511 00:15:21.511 Number of Queues 00:15:21.511 ================ 00:15:21.511 Number of I/O Submission Queues: 127 00:15:21.511 Number of I/O Completion Queues: 127 00:15:21.511 00:15:21.511 Active Namespaces 00:15:21.511 ================= 00:15:21.511 Namespace ID:1 00:15:21.511 Error Recovery Timeout: Unlimited 00:15:21.511 Command Set Identifier: NVM (00h) 00:15:21.511 Deallocate: Supported 00:15:21.511 Deallocated/Unwritten Error: Not Supported 00:15:21.511 Deallocated Read Value: Unknown 00:15:21.511 Deallocate in Write Zeroes: Not Supported 00:15:21.511 Deallocated Guard Field: 0xFFFF 00:15:21.511 Flush: Supported 00:15:21.511 Reservation: Supported 00:15:21.511 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.511 Size (in LBAs): 131072 (0GiB) 00:15:21.511 Capacity (in LBAs): 131072 (0GiB) 00:15:21.511 Utilization (in LBAs): 131072 (0GiB) 00:15:21.511 NGUID: 71E66C9C856F478FAC08D1BCEA6147A4 00:15:21.511 UUID: 71e66c9c-856f-478f-ac08-d1bcea6147a4 00:15:21.511 Thin Provisioning: Not Supported 00:15:21.511 Per-NS Atomic Units: Yes 00:15:21.511 Atomic Boundary Size (Normal): 0 00:15:21.511 Atomic Boundary Size (PFail): 0 00:15:21.511 Atomic Boundary Offset: 0 00:15:21.511 Maximum Single Source Range Length: 65535 00:15:21.511 Maximum Copy Length: 65535 00:15:21.511 Maximum Source Range Count: 1 00:15:21.511 NGUID/EUI64 Never Reused: No 00:15:21.511 Namespace Write Protected: No 00:15:21.511 Number of LBA Formats: 1 00:15:21.511 Current LBA Format: LBA Format #00 00:15:21.511 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:21.511 00:15:21.511 01:01:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:21.768 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.768 [2024-07-25 01:01:14.831043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.034 Initializing NVMe Controllers 00:15:27.034 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:27.034 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:27.034 Initialization complete. Launching workers. 
00:15:27.034 ======================================================== 00:15:27.034 Latency(us) 00:15:27.034 Device Information : IOPS MiB/s Average min max 00:15:27.034 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36132.48 141.14 3541.95 1140.05 7292.00 00:15:27.034 ======================================================== 00:15:27.034 Total : 36132.48 141.14 3541.95 1140.05 7292.00 00:15:27.034 00:15:27.034 [2024-07-25 01:01:19.933607] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.034 01:01:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.034 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.034 [2024-07-25 01:01:20.176367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.351 Initializing NVMe Controllers 00:15:32.351 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:32.351 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:32.351 Initialization complete. Launching workers. 00:15:32.351 ======================================================== 00:15:32.351 Latency(us) 00:15:32.351 Device Information : IOPS MiB/s Average min max 00:15:32.351 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33735.21 131.78 3793.33 1190.09 7627.31 00:15:32.351 ======================================================== 00:15:32.351 Total : 33735.21 131.78 3793.33 1190.09 7627.31 00:15:32.351 00:15:32.351 [2024-07-25 01:01:25.198937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.351 01:01:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:32.351 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.351 [2024-07-25 01:01:25.407870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.614 [2024-07-25 01:01:30.540388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.614 Initializing NVMe Controllers 00:15:37.614 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:37.614 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:37.614 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:37.614 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:37.614 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:37.614 Initialization complete. Launching workers. 
00:15:37.614 Starting thread on core 2 00:15:37.614 Starting thread on core 3 00:15:37.614 Starting thread on core 1 00:15:37.614 01:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:37.614 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.872 [2024-07-25 01:01:30.843726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.151 [2024-07-25 01:01:33.923323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.151 Initializing NVMe Controllers 00:15:41.151 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.151 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.151 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:41.151 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:41.151 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:41.151 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:41.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.151 Initialization complete. Launching workers. 00:15:41.151 Starting thread on core 1 with urgent priority queue 00:15:41.151 Starting thread on core 2 with urgent priority queue 00:15:41.151 Starting thread on core 3 with urgent priority queue 00:15:41.151 Starting thread on core 0 with urgent priority queue 00:15:41.151 SPDK bdev Controller (SPDK2 ) core 0: 5751.33 IO/s 17.39 secs/100000 ios 00:15:41.151 SPDK bdev Controller (SPDK2 ) core 1: 5958.67 IO/s 16.78 secs/100000 ios 00:15:41.151 SPDK bdev Controller (SPDK2 ) core 2: 5941.67 IO/s 16.83 secs/100000 ios 00:15:41.151 SPDK bdev Controller (SPDK2 ) core 3: 4850.67 IO/s 20.62 secs/100000 ios 00:15:41.151 ======================================================== 00:15:41.151 00:15:41.151 01:01:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.151 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.151 [2024-07-25 01:01:34.224738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.151 Initializing NVMe Controllers 00:15:41.151 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.151 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.151 Namespace ID: 1 size: 0GB 00:15:41.151 Initialization complete. 00:15:41.151 INFO: using host memory buffer for IO 00:15:41.151 Hello world! 
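Note on the invocations traced above: every SPDK example binary exercised in this run (spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) addresses the target the same way, via a -r transport-ID string naming the transport type, the vfio-user socket directory, and the subsystem NQN. A minimal sketch of one such invocation, reusing the exact flags from the read-workload perf run earlier in this section (binary path abbreviated; -s 256 and -g are carried over from the trace rather than asserted):

    # Transport ID: transport type + vfio-user socket dir + subsystem NQN (verbatim from this run)
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 5 s of 4096-byte reads at queue depth 128, pinned to core 1 (mask 0x2)
    ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2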
00:15:41.151 [2024-07-25 01:01:34.233785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.151 01:01:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.409 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.409 [2024-07-25 01:01:34.527490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:42.781 Initializing NVMe Controllers 00:15:42.781 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.781 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.781 Initialization complete. Launching workers. 00:15:42.781 submit (in ns) avg, min, max = 7959.5, 3483.3, 4016493.3 00:15:42.781 complete (in ns) avg, min, max = 24370.6, 2042.2, 4015302.2 00:15:42.781 00:15:42.781 Submit histogram 00:15:42.781 ================ 00:15:42.781 Range in us Cumulative Count 00:15:42.781 3.461 - 3.484: 0.0073% ( 1) 00:15:42.781 3.484 - 3.508: 1.0300% ( 140) 00:15:42.781 3.508 - 3.532: 2.6225% ( 218) 00:15:42.781 3.532 - 3.556: 6.5235% ( 534) 00:15:42.781 3.556 - 3.579: 13.4341% ( 946) 00:15:42.781 3.579 - 3.603: 23.4349% ( 1369) 00:15:42.781 3.603 - 3.627: 31.9015% ( 1159) 00:15:42.781 3.627 - 3.650: 40.1929% ( 1135) 00:15:42.781 3.650 - 3.674: 47.0889% ( 944) 00:15:42.781 3.674 - 3.698: 54.4744% ( 1011) 00:15:42.781 3.698 - 3.721: 60.4792% ( 822) 00:15:42.781 3.721 - 3.745: 64.9061% ( 606) 00:15:42.781 3.745 - 3.769: 68.0693% ( 433) 00:15:42.781 3.769 - 3.793: 71.6342% ( 488) 00:15:42.781 3.793 - 3.816: 75.0018% ( 461) 00:15:42.781 3.816 - 3.840: 78.5594% ( 487) 00:15:42.781 3.840 - 3.864: 82.0221% ( 474) 00:15:42.781 3.864 - 3.887: 84.8491% ( 387) 00:15:42.781 3.887 - 3.911: 87.2525% ( 329) 00:15:42.781 3.911 - 3.935: 89.2541% ( 274) 00:15:42.781 3.935 - 3.959: 91.1243% ( 256) 00:15:42.781 3.959 - 3.982: 92.3807% ( 172) 00:15:42.781 3.982 - 4.006: 93.4692% ( 149) 00:15:42.781 4.006 - 4.030: 94.3824% ( 125) 00:15:42.781 4.030 - 4.053: 95.2663% ( 121) 00:15:42.781 4.053 - 4.077: 95.9676% ( 96) 00:15:42.781 4.077 - 4.101: 96.4789% ( 70) 00:15:42.781 4.101 - 4.124: 96.7346% ( 35) 00:15:42.781 4.124 - 4.148: 97.0195% ( 39) 00:15:42.781 4.148 - 4.172: 97.2167% ( 27) 00:15:42.781 4.172 - 4.196: 97.2971% ( 11) 00:15:42.781 4.196 - 4.219: 97.3848% ( 12) 00:15:42.781 4.219 - 4.243: 97.4943% ( 15) 00:15:42.781 4.243 - 4.267: 97.5820% ( 12) 00:15:42.781 4.267 - 4.290: 97.6331% ( 7) 00:15:42.781 4.290 - 4.314: 97.7062% ( 10) 00:15:42.781 4.314 - 4.338: 97.7719% ( 9) 00:15:42.781 4.338 - 4.361: 97.8012% ( 4) 00:15:42.781 4.361 - 4.385: 97.8596% ( 8) 00:15:42.781 4.385 - 4.409: 97.8742% ( 2) 00:15:42.781 4.409 - 4.433: 97.8815% ( 1) 00:15:42.781 4.433 - 4.456: 97.8888% ( 1) 00:15:42.781 4.456 - 4.480: 97.9180% ( 4) 00:15:42.781 4.480 - 4.504: 97.9253% ( 1) 00:15:42.781 4.504 - 4.527: 97.9400% ( 2) 00:15:42.781 4.527 - 4.551: 97.9473% ( 1) 00:15:42.781 4.551 - 4.575: 97.9546% ( 1) 00:15:42.781 4.622 - 4.646: 97.9692% ( 2) 00:15:42.781 4.670 - 4.693: 97.9838% ( 2) 00:15:42.781 4.693 - 4.717: 98.0057% ( 3) 00:15:42.781 4.717 - 4.741: 98.0130% ( 1) 00:15:42.781 4.741 - 4.764: 98.0276% ( 2) 00:15:42.781 4.764 - 4.788: 98.0641% ( 5) 00:15:42.781 4.788 - 4.812: 98.0787% ( 2) 00:15:42.781 4.812 - 4.836: 98.0934% ( 2) 00:15:42.781 4.836 - 4.859: 98.1445% ( 7) 00:15:42.781 4.859 
- 4.883: 98.1883% ( 6) 00:15:42.781 4.883 - 4.907: 98.2029% ( 2) 00:15:42.781 4.907 - 4.930: 98.2687% ( 9) 00:15:42.781 4.930 - 4.954: 98.3198% ( 7) 00:15:42.781 4.954 - 4.978: 98.3710% ( 7) 00:15:42.781 4.978 - 5.001: 98.3929% ( 3) 00:15:42.781 5.001 - 5.025: 98.4075% ( 2) 00:15:42.781 5.025 - 5.049: 98.4148% ( 1) 00:15:42.781 5.049 - 5.073: 98.4586% ( 6) 00:15:42.781 5.073 - 5.096: 98.4878% ( 4) 00:15:42.781 5.096 - 5.120: 98.4951% ( 1) 00:15:42.781 5.120 - 5.144: 98.5244% ( 4) 00:15:42.781 5.144 - 5.167: 98.5390% ( 2) 00:15:42.781 5.167 - 5.191: 98.5609% ( 3) 00:15:42.781 5.191 - 5.215: 98.5755% ( 2) 00:15:42.781 5.215 - 5.239: 98.5901% ( 2) 00:15:42.781 5.262 - 5.286: 98.6047% ( 2) 00:15:42.781 5.286 - 5.310: 98.6120% ( 1) 00:15:42.781 5.310 - 5.333: 98.6339% ( 3) 00:15:42.781 5.404 - 5.428: 98.6412% ( 1) 00:15:42.781 5.428 - 5.452: 98.6485% ( 1) 00:15:42.781 5.476 - 5.499: 98.6559% ( 1) 00:15:42.781 5.547 - 5.570: 98.6632% ( 1) 00:15:42.781 6.068 - 6.116: 98.6705% ( 1) 00:15:42.781 6.163 - 6.210: 98.6778% ( 1) 00:15:42.781 6.400 - 6.447: 98.6851% ( 1) 00:15:42.781 6.447 - 6.495: 98.6924% ( 1) 00:15:42.781 6.495 - 6.542: 98.7070% ( 2) 00:15:42.781 6.969 - 7.016: 98.7143% ( 1) 00:15:42.781 7.064 - 7.111: 98.7289% ( 2) 00:15:42.781 7.111 - 7.159: 98.7362% ( 1) 00:15:42.781 7.253 - 7.301: 98.7435% ( 1) 00:15:42.781 7.301 - 7.348: 98.7508% ( 1) 00:15:42.781 7.348 - 7.396: 98.7654% ( 2) 00:15:42.781 7.396 - 7.443: 98.7800% ( 2) 00:15:42.781 7.443 - 7.490: 98.7873% ( 1) 00:15:42.781 7.490 - 7.538: 98.8093% ( 3) 00:15:42.781 7.538 - 7.585: 98.8385% ( 4) 00:15:42.781 7.585 - 7.633: 98.8458% ( 1) 00:15:42.781 7.633 - 7.680: 98.8604% ( 2) 00:15:42.781 7.680 - 7.727: 98.8677% ( 1) 00:15:42.781 7.822 - 7.870: 98.8823% ( 2) 00:15:42.781 7.870 - 7.917: 98.8896% ( 1) 00:15:42.781 8.059 - 8.107: 98.8969% ( 1) 00:15:42.781 8.107 - 8.154: 98.9188% ( 3) 00:15:42.781 8.154 - 8.201: 98.9335% ( 2) 00:15:42.781 8.249 - 8.296: 98.9481% ( 2) 00:15:42.781 8.296 - 8.344: 98.9554% ( 1) 00:15:42.781 8.391 - 8.439: 98.9700% ( 2) 00:15:42.781 8.533 - 8.581: 98.9773% ( 1) 00:15:42.781 8.628 - 8.676: 98.9846% ( 1) 00:15:42.781 9.055 - 9.102: 98.9919% ( 1) 00:15:42.781 10.098 - 10.145: 98.9992% ( 1) 00:15:42.781 10.667 - 10.714: 99.0065% ( 1) 00:15:42.781 10.856 - 10.904: 99.0138% ( 1) 00:15:42.781 11.804 - 11.852: 99.0211% ( 1) 00:15:42.781 12.041 - 12.089: 99.0357% ( 2) 00:15:42.781 12.516 - 12.610: 99.0430% ( 1) 00:15:42.781 13.084 - 13.179: 99.0503% ( 1) 00:15:42.781 13.274 - 13.369: 99.0576% ( 1) 00:15:42.781 13.369 - 13.464: 99.0649% ( 1) 00:15:42.781 14.412 - 14.507: 99.0722% ( 1) 00:15:42.781 14.696 - 14.791: 99.0796% ( 1) 00:15:42.781 16.877 - 16.972: 99.0869% ( 1) 00:15:42.781 17.067 - 17.161: 99.0942% ( 1) 00:15:42.781 17.161 - 17.256: 99.1015% ( 1) 00:15:42.781 17.351 - 17.446: 99.1161% ( 2) 00:15:42.781 17.446 - 17.541: 99.1307% ( 2) 00:15:42.781 17.541 - 17.636: 99.1599% ( 4) 00:15:42.781 17.636 - 17.730: 99.1964% ( 5) 00:15:42.781 17.730 - 17.825: 99.2549% ( 8) 00:15:42.781 17.825 - 17.920: 99.3060% ( 7) 00:15:42.781 17.920 - 18.015: 99.3571% ( 7) 00:15:42.781 18.015 - 18.110: 99.4083% ( 7) 00:15:42.781 18.110 - 18.204: 99.4813% ( 10) 00:15:42.781 18.204 - 18.299: 99.5325% ( 7) 00:15:42.781 18.299 - 18.394: 99.5617% ( 4) 00:15:42.781 18.394 - 18.489: 99.6274% ( 9) 00:15:42.781 18.489 - 18.584: 99.6640% ( 5) 00:15:42.781 18.584 - 18.679: 99.6932% ( 4) 00:15:42.781 18.679 - 18.773: 99.7151% ( 3) 00:15:42.781 18.773 - 18.868: 99.7224% ( 1) 00:15:42.781 18.868 - 18.963: 99.7516% ( 4) 00:15:42.781 18.963 - 
19.058: 99.7589% ( 1) 00:15:42.781 19.058 - 19.153: 99.7662% ( 1) 00:15:42.781 19.153 - 19.247: 99.7808% ( 2) 00:15:42.781 19.247 - 19.342: 99.7882% ( 1) 00:15:42.781 19.342 - 19.437: 99.8101% ( 3) 00:15:42.781 19.532 - 19.627: 99.8174% ( 1) 00:15:42.781 19.627 - 19.721: 99.8247% ( 1) 00:15:42.781 19.721 - 19.816: 99.8466% ( 3) 00:15:42.781 19.816 - 19.911: 99.8539% ( 1) 00:15:42.781 19.911 - 20.006: 99.8685% ( 2) 00:15:42.781 20.196 - 20.290: 99.8758% ( 1) 00:15:42.781 21.997 - 22.092: 99.8831% ( 1) 00:15:42.781 24.841 - 25.031: 99.8904% ( 1) 00:15:42.781 27.117 - 27.307: 99.8977% ( 1) 00:15:42.781 3980.705 - 4004.978: 99.9708% ( 10) 00:15:42.781 4004.978 - 4029.250: 100.0000% ( 4) 00:15:42.781 00:15:42.782 Complete histogram 00:15:42.782 ================== 00:15:42.782 Range in us Cumulative Count 00:15:42.782 2.039 - 2.050: 1.4026% ( 192) 00:15:42.782 2.050 - 2.062: 30.3601% ( 3964) 00:15:42.782 2.062 - 2.074: 39.6157% ( 1267) 00:15:42.782 2.074 - 2.086: 44.6855% ( 694) 00:15:42.782 2.086 - 2.098: 58.0758% ( 1833) 00:15:42.782 2.098 - 2.110: 61.8380% ( 515) 00:15:42.782 2.110 - 2.121: 67.7040% ( 803) 00:15:42.782 2.121 - 2.133: 77.6244% ( 1358) 00:15:42.782 2.133 - 2.145: 79.6917% ( 283) 00:15:42.782 2.145 - 2.157: 83.7095% ( 550) 00:15:42.782 2.157 - 2.169: 88.3848% ( 640) 00:15:42.782 2.169 - 2.181: 89.4368% ( 144) 00:15:42.782 2.181 - 2.193: 90.4887% ( 144) 00:15:42.782 2.193 - 2.204: 91.7379% ( 171) 00:15:42.782 2.204 - 2.216: 93.7906% ( 281) 00:15:42.782 2.216 - 2.228: 94.8572% ( 146) 00:15:42.782 2.228 - 2.240: 95.3612% ( 69) 00:15:42.782 2.240 - 2.252: 95.5366% ( 24) 00:15:42.782 2.252 - 2.264: 95.6096% ( 10) 00:15:42.782 2.264 - 2.276: 95.7630% ( 21) 00:15:42.782 2.276 - 2.287: 96.0771% ( 43) 00:15:42.782 2.287 - 2.299: 96.2671% ( 26) 00:15:42.782 2.299 - 2.311: 96.3474% ( 11) 00:15:42.782 2.311 - 2.323: 96.3840% ( 5) 00:15:42.782 2.323 - 2.335: 96.4935% ( 15) 00:15:42.782 2.335 - 2.347: 96.6908% ( 27) 00:15:42.782 2.347 - 2.359: 97.0926% ( 55) 00:15:42.782 2.359 - 2.370: 97.3994% ( 42) 00:15:42.782 2.370 - 2.382: 97.7281% ( 45) 00:15:42.782 2.382 - 2.394: 97.9838% ( 35) 00:15:42.782 2.394 - 2.406: 98.1883% ( 28) 00:15:42.782 2.406 - 2.418: 98.2687% ( 11) 00:15:42.782 2.418 - 2.430: 98.3710% ( 14) 00:15:42.782 2.430 - 2.441: 98.4367% ( 9) 00:15:42.782 2.441 - 2.453: 98.4586% ( 3) 00:15:42.782 2.453 - 2.465: 98.4878% ( 4) 00:15:42.782 2.465 - 2.477: 98.5098% ( 3) 00:15:42.782 2.477 - 2.489: 98.5244% ( 2) 00:15:42.782 2.489 - 2.501: 98.5536% ( 4) 00:15:42.782 2.501 - 2.513: 98.5755% ( 3) 00:15:42.782 2.513 - 2.524: 98.5828% ( 1) 00:15:42.782 2.524 - 2.536: 98.5901% ( 1) 00:15:42.782 2.536 - 2.548: 98.5974% ( 1) 00:15:42.782 2.548 - 2.560: 98.6193% ( 3) 00:15:42.782 2.572 - 2.584: 98.6266% ( 1) 00:15:42.782 2.596 - 2.607: 98.6339% ( 1) 00:15:42.782 2.607 - 2.619: 98.6485% ( 2) 00:15:42.782 2.631 - 2.643: 98.6559% ( 1) 00:15:42.782 2.702 - 2.714: 98.6632% ( 1) 00:15:42.782 2.714 - 2.726: 98.6705% ( 1) 00:15:42.782 3.153 - 3.176: 98.6778% ( 1) 00:15:42.782 3.247 - 3.271: 98.6851% ( 1) 00:15:42.782 3.295 - 3.319: 98.6924% ( 1) 00:15:42.782 3.319 - 3.342: 98.6997% ( 1) 00:15:42.782 3.366 - 3.390: 98.7143% ( 2) 00:15:42.782 3.390 - 3.413: 98.7289% ( 2) 00:15:42.782 3.413 - 3.437: 98.7362% ( 1) 00:15:42.782 3.484 - 3.508: 98.7435% ( 1) 00:15:42.782 3.532 - 3.556: 98.7581% ( 2) 00:15:42.782 3.556 - 3.579: 98.7654% ( 1) 00:15:42.782 3.579 - 3.603: 98.7727% ( 1) 00:15:42.782 3.603 - 3.627: 98.7800% ( 1) 00:15:42.782 3.627 - 3.650: 98.7873% ( 1) 00:15:42.782 3.674 - 3.698: 98.8093% ( 3) 
00:15:42.782 3.769 - 3.793: 98.8166% ( 1) 00:15:42.782 3.840 - 3.864: 98.8239% ( 1) 00:15:42.782 3.864 - 3.887: 98.8312% ( 1) 00:15:42.782 3.887 - 3.911: 98.8385% ( 1) 00:15:42.782 4.172 - 4.196: 98.8458% ( 1) 00:15:42.782 4.954 - 4.978: 98.8531% ( 1) 00:15:42.782 5.191 - 5.215: 98.8604% ( 1) 00:15:42.782 5.831 - 5.855: 98.8677% ( 1) 00:15:42.782 6.068 - 6.116: 98.8750% ( 1) 00:15:42.782 6.116 - 6.163: 98.8823% ( 1) 00:15:42.782 6.163 - 6.210: 98.8896% ( 1) 00:15:42.782 6.305 - 6.353: 98.8969% ( 1) 00:15:42.782 6.400 - 6.447: 98.9042% ( 1) 00:15:42.782 6.495 - 6.542: 98.9115% ( 1) 00:15:42.782 6.732 - 6.779: 98.9188% ( 1) 00:15:42.782 6.874 - 6.921: 98.9261% ( 1) 00:15:42.782 7.064 - 7.111: 98.9481% ( 3) 00:15:42.782 7.301 - 7.348: 98.9554% ( 1) 00:15:42.782 8.107 - 8.154: 98.9627% ( 1) 00:15:42.782 15.644 - 15.739: 98.9700% ( 1) 00:15:42.782 15.739 - 15.834: 98.9846% ( 2) 00:15:42.782 15.834 - 15.929: 98.9992% ( 2) 00:15:42.782 15.929 - 16.024: 99.0211% ( 3) 00:15:42.782 16.024 - 16.119: 99.0430% ( 3) 00:15:42.782 16.119 - 16.213: 99.0649% ( 3) 00:15:42.782 16.213 - 16.308: 99.0796% ( 2) 00:15:42.782 16.308 - 16.403: 99.0869% ( 1) 00:15:42.782 16.403 - 16.498: 99.1599% ( 10) 00:15:42.782 16.498 - 16.593: 99.2257% ( 9) 00:15:42.782 16.593 - 16.687: 99.2476% ( 3) 00:15:42.782 16.687 - 16.782: 99.2841% ( 5) 00:15:42.782 16.782 - 16.877: 99.2914% ( 1) 00:15:42.782 16.877 - 16.972: 9[2024-07-25 01:01:35.626004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:42.782 9.3060% ( 2) 00:15:42.782 16.972 - 17.067: 99.3279% ( 3) 00:15:42.782 17.067 - 17.161: 99.3352% ( 1) 00:15:42.782 17.256 - 17.351: 99.3645% ( 4) 00:15:42.782 17.446 - 17.541: 99.3791% ( 2) 00:15:42.782 17.541 - 17.636: 99.3937% ( 2) 00:15:42.782 17.730 - 17.825: 99.4010% ( 1) 00:15:42.782 17.825 - 17.920: 99.4083% ( 1) 00:15:42.782 17.920 - 18.015: 99.4156% ( 1) 00:15:42.782 18.110 - 18.204: 99.4229% ( 1) 00:15:42.782 18.204 - 18.299: 99.4302% ( 1) 00:15:42.782 18.299 - 18.394: 99.4375% ( 1) 00:15:42.782 18.394 - 18.489: 99.4448% ( 1) 00:15:42.782 3592.344 - 3616.616: 99.4521% ( 1) 00:15:42.782 3980.705 - 4004.978: 99.8466% ( 54) 00:15:42.782 4004.978 - 4029.250: 100.0000% ( 21) 00:15:42.782 00:15:42.782 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:42.782 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:42.782 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:42.782 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:42.782 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.040 [ 00:15:43.040 { 00:15:43.040 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.040 "subtype": "Discovery", 00:15:43.040 "listen_addresses": [], 00:15:43.040 "allow_any_host": true, 00:15:43.040 "hosts": [] 00:15:43.040 }, 00:15:43.040 { 00:15:43.040 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.040 "subtype": "NVMe", 00:15:43.040 "listen_addresses": [ 00:15:43.040 { 00:15:43.040 "trtype": "VFIOUSER", 00:15:43.040 "adrfam": "IPv4", 00:15:43.040 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.040 "trsvcid": "0" 00:15:43.040 } 00:15:43.040 ], 00:15:43.040 "allow_any_host": 
true, 00:15:43.040 "hosts": [], 00:15:43.040 "serial_number": "SPDK1", 00:15:43.040 "model_number": "SPDK bdev Controller", 00:15:43.040 "max_namespaces": 32, 00:15:43.040 "min_cntlid": 1, 00:15:43.040 "max_cntlid": 65519, 00:15:43.040 "namespaces": [ 00:15:43.040 { 00:15:43.040 "nsid": 1, 00:15:43.040 "bdev_name": "Malloc1", 00:15:43.040 "name": "Malloc1", 00:15:43.040 "nguid": "21B51829DE8F45A5AE0BC239D30B7B74", 00:15:43.040 "uuid": "21b51829-de8f-45a5-ae0b-c239d30b7b74" 00:15:43.040 }, 00:15:43.040 { 00:15:43.040 "nsid": 2, 00:15:43.040 "bdev_name": "Malloc3", 00:15:43.040 "name": "Malloc3", 00:15:43.040 "nguid": "9DAA03E1EF4741E8B7E7A7B919FBD197", 00:15:43.040 "uuid": "9daa03e1-ef47-41e8-b7e7-a7b919fbd197" 00:15:43.040 } 00:15:43.040 ] 00:15:43.040 }, 00:15:43.040 { 00:15:43.040 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.040 "subtype": "NVMe", 00:15:43.040 "listen_addresses": [ 00:15:43.040 { 00:15:43.040 "trtype": "VFIOUSER", 00:15:43.040 "adrfam": "IPv4", 00:15:43.040 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.040 "trsvcid": "0" 00:15:43.040 } 00:15:43.040 ], 00:15:43.040 "allow_any_host": true, 00:15:43.040 "hosts": [], 00:15:43.040 "serial_number": "SPDK2", 00:15:43.040 "model_number": "SPDK bdev Controller", 00:15:43.040 "max_namespaces": 32, 00:15:43.040 "min_cntlid": 1, 00:15:43.040 "max_cntlid": 65519, 00:15:43.040 "namespaces": [ 00:15:43.040 { 00:15:43.040 "nsid": 1, 00:15:43.040 "bdev_name": "Malloc2", 00:15:43.040 "name": "Malloc2", 00:15:43.040 "nguid": "71E66C9C856F478FAC08D1BCEA6147A4", 00:15:43.040 "uuid": "71e66c9c-856f-478f-ac08-d1bcea6147a4" 00:15:43.040 } 00:15:43.040 ] 00:15:43.040 } 00:15:43.040 ] 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3736438 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.040 01:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:43.040 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.040 [2024-07-25 01:01:36.122694] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.297 Malloc4 00:15:43.297 01:01:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:43.554 [2024-07-25 01:01:36.476180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.554 01:01:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.554 Asynchronous Event Request test 00:15:43.554 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.554 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.554 Registering asynchronous event callbacks... 00:15:43.554 Starting namespace attribute notice tests for all controllers... 00:15:43.554 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.554 aer_cb - Changed Namespace 00:15:43.554 Cleaning up... 00:15:43.812 [ 00:15:43.812 { 00:15:43.812 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.812 "subtype": "Discovery", 00:15:43.812 "listen_addresses": [], 00:15:43.812 "allow_any_host": true, 00:15:43.812 "hosts": [] 00:15:43.812 }, 00:15:43.812 { 00:15:43.812 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.812 "subtype": "NVMe", 00:15:43.812 "listen_addresses": [ 00:15:43.812 { 00:15:43.812 "trtype": "VFIOUSER", 00:15:43.812 "adrfam": "IPv4", 00:15:43.812 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.812 "trsvcid": "0" 00:15:43.812 } 00:15:43.812 ], 00:15:43.812 "allow_any_host": true, 00:15:43.812 "hosts": [], 00:15:43.812 "serial_number": "SPDK1", 00:15:43.812 "model_number": "SPDK bdev Controller", 00:15:43.812 "max_namespaces": 32, 00:15:43.812 "min_cntlid": 1, 00:15:43.812 "max_cntlid": 65519, 00:15:43.812 "namespaces": [ 00:15:43.812 { 00:15:43.812 "nsid": 1, 00:15:43.812 "bdev_name": "Malloc1", 00:15:43.812 "name": "Malloc1", 00:15:43.812 "nguid": "21B51829DE8F45A5AE0BC239D30B7B74", 00:15:43.812 "uuid": "21b51829-de8f-45a5-ae0b-c239d30b7b74" 00:15:43.812 }, 00:15:43.812 { 00:15:43.812 "nsid": 2, 00:15:43.812 "bdev_name": "Malloc3", 00:15:43.812 "name": "Malloc3", 00:15:43.812 "nguid": "9DAA03E1EF4741E8B7E7A7B919FBD197", 00:15:43.812 "uuid": "9daa03e1-ef47-41e8-b7e7-a7b919fbd197" 00:15:43.812 } 00:15:43.812 ] 00:15:43.812 }, 00:15:43.812 { 00:15:43.812 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.812 "subtype": "NVMe", 00:15:43.812 "listen_addresses": [ 00:15:43.812 { 00:15:43.812 "trtype": "VFIOUSER", 00:15:43.812 "adrfam": "IPv4", 00:15:43.812 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.812 "trsvcid": "0" 00:15:43.812 } 00:15:43.812 ], 00:15:43.812 "allow_any_host": true, 00:15:43.812 "hosts": [], 00:15:43.812 "serial_number": "SPDK2", 00:15:43.812 "model_number": "SPDK bdev Controller", 00:15:43.812 
"max_namespaces": 32, 00:15:43.812 "min_cntlid": 1, 00:15:43.812 "max_cntlid": 65519, 00:15:43.812 "namespaces": [ 00:15:43.812 { 00:15:43.812 "nsid": 1, 00:15:43.812 "bdev_name": "Malloc2", 00:15:43.812 "name": "Malloc2", 00:15:43.812 "nguid": "71E66C9C856F478FAC08D1BCEA6147A4", 00:15:43.812 "uuid": "71e66c9c-856f-478f-ac08-d1bcea6147a4" 00:15:43.812 }, 00:15:43.812 { 00:15:43.812 "nsid": 2, 00:15:43.812 "bdev_name": "Malloc4", 00:15:43.812 "name": "Malloc4", 00:15:43.812 "nguid": "C01FD793245C4ACB98D0D0AD88F74239", 00:15:43.812 "uuid": "c01fd793-245c-4acb-98d0-d0ad88f74239" 00:15:43.812 } 00:15:43.812 ] 00:15:43.812 } 00:15:43.812 ] 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3736438 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3730857 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3730857 ']' 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3730857 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3730857 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3730857' 00:15:43.812 killing process with pid 3730857 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3730857 00:15:43.812 01:01:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3730857 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3736582 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3736582' 00:15:44.070 Process pid: 3736582 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3736582 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3736582 ']' 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.070 01:01:37 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:44.070 01:01:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:44.070 [2024-07-25 01:01:37.137614] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:44.070 [2024-07-25 01:01:37.138668] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:15:44.070 [2024-07-25 01:01:37.138743] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.070 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.070 [2024-07-25 01:01:37.202493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.328 [2024-07-25 01:01:37.293226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.328 [2024-07-25 01:01:37.293295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.328 [2024-07-25 01:01:37.293322] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.328 [2024-07-25 01:01:37.293335] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.328 [2024-07-25 01:01:37.293348] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.328 [2024-07-25 01:01:37.293445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.328 [2024-07-25 01:01:37.293504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.328 [2024-07-25 01:01:37.293622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.328 [2024-07-25 01:01:37.293624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.328 [2024-07-25 01:01:37.396124] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:44.328 [2024-07-25 01:01:37.396341] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:44.328 [2024-07-25 01:01:37.396598] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:44.328 [2024-07-25 01:01:37.397221] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:44.328 [2024-07-25 01:01:37.397482] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
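The restarted target is then reprovisioned through rpc.py, this time with the VFIOUSER transport in interrupt mode. Condensed as a sketch (flags verbatim from the traced commands below; the long workspace prefix on rpc.py is shortened), one device's bring-up is:

    # Interrupt-mode VFIOUSER transport, then one malloc-backed subsystem per device
    rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0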
00:15:44.328 01:01:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:44.328 01:01:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:44.328 01:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:45.699 01:01:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:45.699 01:01:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:45.699 01:01:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:45.699 01:01:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:45.699 01:01:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:45.699 01:01:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:45.958 Malloc1 00:15:45.958 01:01:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:46.216 01:01:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:46.474 01:01:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:46.731 01:01:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.731 01:01:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:46.731 01:01:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:46.988 Malloc2 00:15:46.988 01:01:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:47.244 01:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:47.501 01:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3736582 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3736582 ']' 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3736582 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:47.759 01:01:40 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3736582 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3736582' 00:15:47.759 killing process with pid 3736582 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3736582 00:15:47.759 01:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3736582 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:48.016 00:15:48.016 real 0m52.397s 00:15:48.016 user 3m26.979s 00:15:48.016 sys 0m4.352s 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:48.016 ************************************ 00:15:48.016 END TEST nvmf_vfio_user 00:15:48.016 ************************************ 00:15:48.016 01:01:41 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:48.016 01:01:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:48.016 01:01:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:48.016 01:01:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.016 ************************************ 00:15:48.016 START TEST nvmf_vfio_user_nvme_compliance 00:15:48.016 ************************************ 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:48.016 * Looking for test storage... 
00:15:48.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3737173 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3737173' 00:15:48.016 Process pid: 3737173 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3737173 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3737173 ']' 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:48.016 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.017 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:48.017 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:48.273 [2024-07-25 01:01:41.191320] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:15:48.273 [2024-07-25 01:01:41.191397] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.273 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.273 [2024-07-25 01:01:41.250698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.273 [2024-07-25 01:01:41.341211] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.273 [2024-07-25 01:01:41.341289] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.273 [2024-07-25 01:01:41.341312] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.273 [2024-07-25 01:01:41.341324] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.273 [2024-07-25 01:01:41.341335] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
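For anyone reproducing this outside the CI harness: the target above was started as nvmf_tgt -i 0 -e 0xFFFF -m 0x7, i.e. shared-memory instance 0 (-i), all tracepoint groups enabled (-e 0xFFFF), and a three-core mask (-m 0x7, cores 0-2, matching the three reactor threads reported below). The rpc_cmd calls that follow are the suite's wrapper around SPDK's scripts/rpc.py; stripped of the wrapper, the provisioning sequence is roughly:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &            # target on cores 0-2
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER      # vfio-user transport
  mkdir -p /var/run/vfio-user                             # directory for the vfio-user socket
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0   # 64 MiB RAM disk, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0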
00:15:48.273 [2024-07-25 01:01:41.341394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.273 [2024-07-25 01:01:41.341417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.273 [2024-07-25 01:01:41.341419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.530 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:48.530 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:48.530 01:01:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 malloc0 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.461 
01:01:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:49.461 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.718 00:15:49.718 00:15:49.718 CUnit - A unit testing framework for C - Version 2.1-3 00:15:49.718 http://cunit.sourceforge.net/ 00:15:49.718 00:15:49.718 00:15:49.718 Suite: nvme_compliance 00:15:49.718 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 01:01:42.684047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.718 [2024-07-25 01:01:42.685502] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:49.718 [2024-07-25 01:01:42.685536] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:49.718 [2024-07-25 01:01:42.685563] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:49.718 [2024-07-25 01:01:42.687064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.718 passed 00:15:49.718 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 01:01:42.775721] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.718 [2024-07-25 01:01:42.778738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.718 passed 00:15:49.718 Test: admin_identify_ns ...[2024-07-25 01:01:42.865301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.976 [2024-07-25 01:01:42.927274] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:49.976 [2024-07-25 01:01:42.935259] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:49.976 [2024-07-25 01:01:42.956383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.976 passed 00:15:49.976 Test: admin_get_features_mandatory_features ...[2024-07-25 01:01:43.040083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.976 [2024-07-25 01:01:43.043101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.976 passed 00:15:49.976 Test: admin_get_features_optional_features ...[2024-07-25 01:01:43.124652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.976 [2024-07-25 01:01:43.127679] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.233 passed 00:15:50.234 Test: admin_set_features_number_of_queues ...[2024-07-25 01:01:43.211857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.234 [2024-07-25 01:01:43.316372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.234 passed 00:15:50.490 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 01:01:43.401595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.491 [2024-07-25 01:01:43.404633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.491 passed 00:15:50.491 Test: admin_get_log_page_with_lpo ...[2024-07-25 01:01:43.486823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.491 [2024-07-25 01:01:43.554272] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:50.491 [2024-07-25 01:01:43.567335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.491 passed 00:15:50.748 Test: fabric_property_get ...[2024-07-25 01:01:43.651053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.748 [2024-07-25 01:01:43.652352] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:50.748 [2024-07-25 01:01:43.654072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.748 passed 00:15:50.748 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 01:01:43.735630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.748 [2024-07-25 01:01:43.736898] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:50.748 [2024-07-25 01:01:43.738655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.748 passed 00:15:50.748 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 01:01:43.823805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.005 [2024-07-25 01:01:43.907263] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.005 [2024-07-25 01:01:43.923279] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.005 [2024-07-25 01:01:43.928362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.005 passed 00:15:51.005 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 01:01:44.012090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.006 [2024-07-25 01:01:44.013405] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:51.006 [2024-07-25 01:01:44.015110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.006 passed 00:15:51.006 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 01:01:44.095353] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.292 [2024-07-25 01:01:44.173260] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:51.292 [2024-07-25 01:01:44.197267] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.292 [2024-07-25 01:01:44.202360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.292 passed 00:15:51.292 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 01:01:44.285989] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.293 [2024-07-25 01:01:44.287318] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:51.293 [2024-07-25 01:01:44.287358] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:51.293 [2024-07-25 01:01:44.289008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.293 passed 00:15:51.293 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 01:01:44.370301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.550 [2024-07-25 01:01:44.464268] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1
00:15:51.550 [2024-07-25 01:01:44.472254] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:15:51.550 [2024-07-25 01:01:44.480267] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:15:51.550 [2024-07-25 01:01:44.488253] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:15:51.550 [2024-07-25 01:01:44.517361] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:51.550 passed
00:15:51.550 Test: admin_create_io_sq_verify_pc ...[2024-07-25 01:01:44.600963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:51.550 [2024-07-25 01:01:44.617267] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:15:51.550 [2024-07-25 01:01:44.635326] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:51.550 passed
00:15:51.808 Test: admin_create_io_qp_max_qps ...[2024-07-25 01:01:44.717886] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:52.739 [2024-07-25 01:01:45.810259] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:15:53.303 [2024-07-25 01:01:46.189066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.303 passed
00:15:53.303 Test: admin_create_io_sq_shared_cq ...[2024-07-25 01:01:46.271264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:53.303 [2024-07-25 01:01:46.405265] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:53.303 [2024-07-25 01:01:46.442352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.561 passed
00:15:53.561
00:15:53.561 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:53.561               suites      1      1    n/a      0        0
00:15:53.561                tests     18     18     18      0        0
00:15:53.561              asserts    360    360    360      0      n/a
00:15:53.561
00:15:53.561 Elapsed time = 1.557 seconds
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3737173
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3737173 ']'
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3737173
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3737173
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3737173'
00:15:53.561 killing process with pid 3737173
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 3737173
00:15:53.561 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3737173
00:15:53.820 01:01:46
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:53.820 00:15:53.820 real 0m5.683s 00:15:53.820 user 0m16.077s 00:15:53.820 sys 0m0.521s 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.820 ************************************ 00:15:53.820 END TEST nvmf_vfio_user_nvme_compliance 00:15:53.820 ************************************ 00:15:53.820 01:01:46 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:53.820 01:01:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:53.820 01:01:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.820 01:01:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.820 ************************************ 00:15:53.820 START TEST nvmf_vfio_user_fuzz 00:15:53.820 ************************************ 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:53.820 * Looking for test storage... 00:15:53.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3737894 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3737894' 00:15:53.820 Process pid: 3737894 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3737894 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3737894 ']' 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
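The fuzz pass below drives the vfio-user endpoint that the rpc_cmd calls provision (the same malloc0-backed subsystem layout as the compliance run) with SPDK's nvme_fuzz example app, while the target itself runs single-core (-m 0x1). A hedged reading of the fuzzer's flags as they appear: -m 0x2 pins it to core 1, away from the target's core 0; -t 30 runs for 30 seconds per pass; -S 123456 fixes the random seed so any failure is reproducible; -F carries the transport ID string; -N and -a select the fuzzing behavior (nvme_fuzz --help is authoritative for these two). Unwrapped:

  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

In the opcode dump printed at the end of the run, if the numbers are read as decimal NVMe opcodes, admin 8/9/10/24 would be Abort, Set Features, Get Features and Keep Alive, and I/O opcode 0 would be Flush.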
00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:53.820 01:01:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.078 01:01:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:54.078 01:01:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:54.078 01:01:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.451 malloc0 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:55.451 01:01:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:27.503 Fuzzing completed. 
Shutting down the fuzz application
00:16:27.503
00:16:27.503 Dumping successful admin opcodes:
00:16:27.503 8, 9, 10, 24,
00:16:27.503 Dumping successful io opcodes:
00:16:27.503 0,
00:16:27.503 NS: 0x200003a1ef00 I/O qp, Total commands completed: 591818, total successful commands: 2285, random_seed: 416829632
00:16:27.503 NS: 0x200003a1ef00 admin qp, Total commands completed: 76971, total successful commands: 596, random_seed: 2010810624
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3737894
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3737894 ']'
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3737894
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3737894
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3737894'
00:16:27.503 killing process with pid 3737894
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3737894
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3737894
00:16:27.503 01:02:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:16:27.503 01:02:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:16:27.503
00:16:27.503 real 0m32.211s
00:16:27.503 user 0m31.015s
00:16:27.503 sys 0m29.063s
00:16:27.503 01:02:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable
00:16:27.503 01:02:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:16:27.503 ************************************
00:16:27.503 END TEST nvmf_vfio_user_fuzz
00:16:27.503 ************************************
00:16:27.503 01:02:19 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:16:27.503 01:02:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:16:27.503 01:02:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:16:27.503 01:02:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:27.503 ************************************
00:16:27.503 START TEST nvmf_host_management
************************************ 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:27.503 * Looking for test storage... 00:16:27.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
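From here host_management runs over real TCP, so nvmftestinit has to find physical ports: the PCI scan below matches NICs against known Intel (e810/x722) and Mellanox device IDs, and this rig resolves to two ice-driven E810 ports (0x8086:0x159b), cvl_0_0 and cvl_0_1. The suite then splits them into a minimal two-node topology, target inside a network namespace and initiator in the root namespace. Condensed from the ip(8) xtrace further down (interface names are this rig's; the address flushes are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up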
00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:27.503 01:02:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.071 01:02:21 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.071 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.071 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.072 01:02:21 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:28.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:28.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms
00:16:28.072
00:16:28.072 --- 10.0.0.2 ping statistics ---
00:16:28.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:28.072 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:28.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:28.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms
00:16:28.072
00:16:28.072 --- 10.0.0.1 ping statistics ---
00:16:28.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:28.072 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3743223
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:16:28.072 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3743223
00:16:28.330 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3743223 ']'
00:16:28.330 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:28.330 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100
00:16:28.330 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
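With both directions ping-verified above (0.236 ms and 0.085 ms RTTs across the namespace boundary) and nvme-tcp modprobed on the host side, the target is relaunched inside the namespace: nvmf/common.sh@270 prepends NVMF_TARGET_NS_CMD to NVMF_APP, so the effective command is

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

where -m 0x1E is binary 11110, i.e. cores 1-4 (the four reactors reported just below), leaving core 0 free for host-side tools such as bdevperf.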
00:16:28.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.330 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:28.330 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.330 [2024-07-25 01:02:21.266077] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:28.330 [2024-07-25 01:02:21.266166] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.330 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.330 [2024-07-25 01:02:21.338088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:28.330 [2024-07-25 01:02:21.437366] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.330 [2024-07-25 01:02:21.437416] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.330 [2024-07-25 01:02:21.437433] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.330 [2024-07-25 01:02:21.437452] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.330 [2024-07-25 01:02:21.437464] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.330 [2024-07-25 01:02:21.437553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.330 [2024-07-25 01:02:21.437611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.330 [2024-07-25 01:02:21.437669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:28.330 [2024-07-25 01:02:21.437671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.588 [2024-07-25 01:02:21.582788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- 
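At this point nvmfappstart has launched the target wrapped in NVMF_TARGET_NS_CMD (the "ip netns exec cvl_0_0_ns_spdk" prefix visible on the @480 line), and waitforlisten blocks until the RPC socket answers; -m 0x1E also explains the four reactor notices, since mask 0x1E selects cores 1 through 4. A simplified rendering of the launch-and-wait pattern (the real waitforlisten in autotest_common.sh carries more retry and error handling than this):

    # launch nvmf_tgt inside the target namespace, then poll its RPC socket
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for ((i = 100; i != 0; i--)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1              # app died during startup
        [[ -S /var/tmp/spdk.sock ]] &&
            ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    (( i != 0 ))                                              # fail if the retries ran out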
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.588 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.589 Malloc0 00:16:28.589 [2024-07-25 01:02:21.641732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3743387 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3743387 /var/tmp/bdevperf.sock 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3743387 ']' 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
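The rm/cat/rpc_cmd trio at @22, @23, and @30 is the harness's batching idiom: the subsystem setup RPCs are written out in one go and fed to a single rpc_cmd invocation, which is why "Malloc0" and the port 4420 listen notice appear without per-command traces. A plausible reconstruction of the batch (the method names are real rpc.py commands, but the exact arguments, serial number, and sizes here are guesses, apart from the 64 MiB / 512 B malloc defaults this suite uses elsewhere):

    # batched target setup, roughly what host_management.sh feeds to rpc_cmd
    rpc_cmd <<EOF
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    EOF

Note that the subsystem cannot have been created with allow-any-host: the remove_host/add_host fault injection later in this test only bites because access is gated on the explicit host NQN list.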
00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:28.589 { 00:16:28.589 "params": { 00:16:28.589 "name": "Nvme$subsystem", 00:16:28.589 "trtype": "$TEST_TRANSPORT", 00:16:28.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.589 "adrfam": "ipv4", 00:16:28.589 "trsvcid": "$NVMF_PORT", 00:16:28.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.589 "hdgst": ${hdgst:-false}, 00:16:28.589 "ddgst": ${ddgst:-false} 00:16:28.589 }, 00:16:28.589 "method": "bdev_nvme_attach_controller" 00:16:28.589 } 00:16:28.589 EOF 00:16:28.589 )") 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:28.589 01:02:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:28.589 "params": { 00:16:28.589 "name": "Nvme0", 00:16:28.589 "trtype": "tcp", 00:16:28.589 "traddr": "10.0.0.2", 00:16:28.589 "adrfam": "ipv4", 00:16:28.589 "trsvcid": "4420", 00:16:28.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:28.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:28.589 "hdgst": false, 00:16:28.589 "ddgst": false 00:16:28.589 }, 00:16:28.589 "method": "bdev_nvme_attach_controller" 00:16:28.589 }' 00:16:28.589 [2024-07-25 01:02:21.712872] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:28.589 [2024-07-25 01:02:21.712946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743387 ] 00:16:28.847 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.847 [2024-07-25 01:02:21.775613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.847 [2024-07-25 01:02:21.861925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.104 Running I/O for 10 seconds... 
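bdevperf's target description never touches disk: gen_nvmf_target_json prints the bdev_nvme_attach_controller config shown above (note how the template's $subsystem and $TEST_TRANSPORT placeholders were filled in to produce the Nvme0 block), and the caller hands it over through process substitution, which the shell exposes as /dev/fd/63 on the command line. Spelled out:

    # the process-substitution trick behind '--json /dev/fd/63'
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
    # -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: read-back
    # verification workload; -t 10: run for ten seconds

The -r socket matters here: it lets the test script drive this bdevperf instance over RPC while the workload is running.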
00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:29.104 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:29.105 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:29.105 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.105 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:29.105 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.105 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:29.105 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:29.105 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=538 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 538 -ge 100 ']' 00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- 
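waitforio, traced above, is a bounded poll: up to ten samples, a quarter second apart, of the Nvme0n1 read counter from bdevperf's RPC server, succeeding once 100 cumulative reads prove the data path is live (here the first sample reads 67 and the second already reads 538). As a standalone function:

    # poll bdevperf until the bdev has completed enough reads (as traced)
    waitforio() {
        local rpc_sock=$1 bdev=$2 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                            jq -r '.bdevs[0].num_read_ops')
            [ "$read_io_count" -ge 100 ] && return 0     # enough I/O observed
            sleep 0.25
        done
        return 1                                         # no I/O within ~2.5 s
    }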
target/host_management.sh@59 -- # ret=0
00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.363 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:29.363 [2024-07-25 01:02:22.492572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801120 is same with the state(5) to be set
(previous message repeated 41 more times between 01:02:22.492690 and 01:02:22.493209; identical lines elided)
00:16:29.364 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.364 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:29.364 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.364 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:29.364 [2024-07-25 01:02:22.498317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:29.364 [2024-07-25 01:02:22.498357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same command/completion pair repeats for all 64 outstanding I/Os of the -q 64 queue: READ cid:22 through cid:63 over lba 76544-81792, then WRITE cid:0 through cid:20 over lba 81920-84480, each len:128 and each ABORTED - SQ DELETION, timestamps 01:02:22.498385 through 01:02:22.500483; near-identical lines elided)
00:16:29.366 [2024-07-25 01:02:22.500601] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1804110 was disconnected and freed. reset controller.
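The burst condensed above is the test working as intended, not a malfunction: with the full 64-deep queue in flight, the @84 rpc_cmd revoked the host's access, the target tore down the queue pair, and every outstanding READ and WRITE came back ABORTED - SQ DELETION, at which point bdevperf's error path disconnects the qpair and schedules a controller reset. The fault and its repair are just two RPCs:

    # fault injection: revoke the initiator's host NQN mid-I/O, then restore it
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ...qpair drops, in-flight I/O aborts, bdevperf begins resetting...
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # the reset that retries after this point reconnects successfully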
00:16:29.366 [2024-07-25 01:02:22.500691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:29.366 [2024-07-25 01:02:22.500730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same ASYNC EVENT REQUEST / ABORTED pair repeats for admin cid:1, cid:2 and cid:3, timestamps 01:02:22.500746 through 01:02:22.500822; near-identical lines elided)
00:16:29.366 [2024-07-25 01:02:22.500835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f31e0 is same with the state(5) to be set
00:16:29.366 [2024-07-25 01:02:22.502007] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:29.366 task offset: 76416 on job bdev=Nvme0n1 fails
00:16:29.366
00:16:29.366                     Latency(us)
00:16:29.366 Device Information  : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:16:29.366 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:29.366 Job: Nvme0n1 ended in about 0.41 seconds with error
00:16:29.366 Verification LBA range: start 0x0 length 0x400
00:16:29.366 Nvme0n1             : 0.41        1440.41  90.03  154.42  0.00  39018.37  2900.57  34175.81
00:16:29.366 ===================================================================================================================
00:16:29.366 Total               :             1440.41  90.03  154.42  0.00  39018.37  2900.57  34175.81
00:16:29.366 [2024-07-25 01:02:22.504023] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:29.366 [2024-07-25 01:02:22.504051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f31e0 (9): Bad file descriptor
00:16:29.366 01:02:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.366 01:02:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:16:29.624 [2024-07-25 01:02:22.558130] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:30.554 01:02:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3743387 00:16:30.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3743387) - No such process 00:16:30.554 01:02:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:30.554 01:02:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:30.555 { 00:16:30.555 "params": { 00:16:30.555 "name": "Nvme$subsystem", 00:16:30.555 "trtype": "$TEST_TRANSPORT", 00:16:30.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.555 "adrfam": "ipv4", 00:16:30.555 "trsvcid": "$NVMF_PORT", 00:16:30.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.555 "hdgst": ${hdgst:-false}, 00:16:30.555 "ddgst": ${ddgst:-false} 00:16:30.555 }, 00:16:30.555 "method": "bdev_nvme_attach_controller" 00:16:30.555 } 00:16:30.555 EOF 00:16:30.555 )") 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:30.555 01:02:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:30.555 "params": { 00:16:30.555 "name": "Nvme0", 00:16:30.555 "trtype": "tcp", 00:16:30.555 "traddr": "10.0.0.2", 00:16:30.555 "adrfam": "ipv4", 00:16:30.555 "trsvcid": "4420", 00:16:30.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:30.555 "hdgst": false, 00:16:30.555 "ddgst": false 00:16:30.555 }, 00:16:30.555 "method": "bdev_nvme_attach_controller" 00:16:30.555 }' 00:16:30.555 [2024-07-25 01:02:23.551831] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:30.555 [2024-07-25 01:02:23.551923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743547 ] 00:16:30.555 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.555 [2024-07-25 01:02:23.613580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.555 [2024-07-25 01:02:23.700569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.119 Running I/O for 1 seconds... 
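This second bdevperf pass differs from the first in two ways visible on the @100 command line: there is no -r RPC socket (nothing will steer it mid-run) and -t 1 replaces -t 10, so it is a plain one-second verify whose exit status alone decides whether the recovered target still serves correct data. The preceding kill -9 of the old perfpid failing with "No such process" is expected, which is why the script follows it with true:

    # post-recovery sanity pass: fire-and-forget, judged by exit code
    kill -9 "$perfpid" || true       # first bdevperf may already be gone
    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1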
00:16:32.050
00:16:32.050                     Latency(us)
00:16:32.050 Device Information  : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:16:32.050 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:32.050 Verification LBA range: start 0x0 length 0x400
00:16:32.050 Nvme0n1             : 1.00        1594.96  99.68  0.00    0.00  39483.29  6941.96  33981.63
00:16:32.050 ===================================================================================================================
00:16:32.050 Total               :             1594.96  99.68  0.00    0.00  39483.29  6941.96  33981.63
00:16:32.308 01:02:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 01:02:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 01:02:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 01:02:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 01:02:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.308 rmmod nvme_tcp 00:16:32.308 rmmod nvme_fabrics 00:16:32.308 rmmod nvme_keyring 00:16:32.308 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3743223 ']' 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3743223 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3743223 ']' 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3743223 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3743223 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3743223' killing process with pid 3743223 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3743223 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3743223 00:16:32.566 [2024-07-25 01:02:25.551455] app.c:
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:32.566 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.566 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.566 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.566 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.566 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.566 01:02:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.566 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.566 01:02:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.498 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.498 01:02:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:34.498 00:16:34.498 real 0m8.540s 00:16:34.498 user 0m19.542s 00:16:34.498 sys 0m2.504s 00:16:34.498 01:02:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:34.498 01:02:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.498 ************************************ 00:16:34.498 END TEST nvmf_host_management 00:16:34.498 ************************************ 00:16:34.756 01:02:27 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:34.756 01:02:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:34.756 01:02:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:34.756 01:02:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.756 ************************************ 00:16:34.756 START TEST nvmf_lvol 00:16:34.756 ************************************ 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:34.756 * Looking for test storage... 
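The teardown traced above is deliberately ordered: unload the kernel initiator modules (the rmmod lines), then killprocess the target, then let remove_spdk_ns and nvmf_tcp_fini delete the namespace and flush the leftover address. The "Failed to unlink lock fd for core 1, errno: 2" ERROR is the app grumbling on exit, most likely because the earlier @97 rm -f had already deleted the /var/tmp/spdk_cpu_lock_* files; it does not fail the test. killprocess itself reduces to:

    # condensed from the killprocess trace (the real helper handles more cases,
    # e.g. a sudo wrapper, where comm would be 'sudo' instead of 'reactor_1')
    killprocess() {
        local pid=$1
        kill -0 "$pid"                       # assert the process is still alive
        ps --no-headers -o comm= "$pid"      # check what we are about to kill
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it so sockets and ports free up
    }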
00:16:34.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.756 01:02:27 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.756 01:02:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.757 01:02:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.657 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:36.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:36.658 
01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:36.658 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:36.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:16:36.916 00:16:36.916 --- 10.0.0.2 ping statistics --- 00:16:36.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.916 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:16:36.916 00:16:36.916 --- 10.0.0.1 ping statistics --- 00:16:36.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.916 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3745744 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3745744 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3745744 ']' 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:36.916 01:02:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:36.916 [2024-07-25 01:02:29.926417] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:36.916 [2024-07-25 01:02:29.926500] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.916 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.916 [2024-07-25 01:02:29.992196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.174 [2024-07-25 01:02:30.090213] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.174 [2024-07-25 01:02:30.090291] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
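Condensed for readers following along: the nvmf_tcp_init block traced above splits the two E810 ports (0x8086:0x159b) across network namespaces, so target and initiator exercise real hardware over a direct TCP path. A minimal standalone sketch of the same topology, assuming the cvl_0_0/cvl_0_1 interface names this run detected (they will differ on other hosts):

  # target port moves into a private namespace and gets 10.0.0.2
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator port stays in the root namespace as 10.0.0.1
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # sanity-check the path, as the trace does above

The sub-millisecond RTTs in the ping output above confirm the path between the two ports is up before any NVMe traffic is attempted.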
00:16:37.174 [2024-07-25 01:02:30.090319] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.174 [2024-07-25 01:02:30.090332] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.174 [2024-07-25 01:02:30.090342] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.174 [2024-07-25 01:02:30.090395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.174 [2024-07-25 01:02:30.090457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.174 [2024-07-25 01:02:30.090460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.174 01:02:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:37.174 01:02:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:37.174 01:02:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.174 01:02:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:37.174 01:02:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:37.174 01:02:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.174 01:02:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:37.431 [2024-07-25 01:02:30.475853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.431 01:02:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.689 01:02:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:37.689 01:02:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.947 01:02:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:37.947 01:02:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:38.204 01:02:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:38.461 01:02:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=11583cd4-b8e8-4dd7-892d-86a93679ca95 00:16:38.461 01:02:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 11583cd4-b8e8-4dd7-892d-86a93679ca95 lvol 20 00:16:38.718 01:02:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=64070c56-fa6b-47ab-a0da-92d891e49a38 00:16:38.718 01:02:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:38.975 01:02:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 64070c56-fa6b-47ab-a0da-92d891e49a38 00:16:39.232 01:02:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
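Before the listener comes up, the backing stack has been layered bottom-up through rpc.py; compressed, with the returned UUIDs elided and the workspace path shortened to rpc.py, the chain traced above is:

  rpc.py bdev_malloc_create 64 512        # Malloc0: 64 MiB, 512 B blocks
  rpc.py bdev_malloc_create 64 512        # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs          # prints the lvstore UUID
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20      # 20 MiB lvol to export
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The tcp.c listen notice that follows is the target acknowledging that last call.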
00:16:39.490 [2024-07-25 01:02:32.496280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.490 01:02:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:39.747 01:02:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3746164 00:16:39.747 01:02:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:39.747 01:02:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:39.747 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.678 01:02:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 64070c56-fa6b-47ab-a0da-92d891e49a38 MY_SNAPSHOT 00:16:40.935 01:02:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2475360c-32e5-477b-8cde-3c26abc0025b 00:16:40.935 01:02:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 64070c56-fa6b-47ab-a0da-92d891e49a38 30 00:16:41.500 01:02:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2475360c-32e5-477b-8cde-3c26abc0025b MY_CLONE 00:16:41.500 01:02:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a1dc7e61-2e3b-47f8-a551-c526768c6c43 00:16:41.500 01:02:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a1dc7e61-2e3b-47f8-a551-c526768c6c43 00:16:42.432 01:02:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3746164 00:16:50.530 Initializing NVMe Controllers 00:16:50.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:50.530 Controller IO queue size 128, less than required. 00:16:50.530 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:50.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:50.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:50.530 Initialization complete. Launching workers. 
00:16:50.530 ======================================================== 00:16:50.530 Latency(us) 00:16:50.530 Device Information : IOPS MiB/s Average min max 00:16:50.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9904.00 38.69 12928.75 697.01 70718.64 00:16:50.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10670.80 41.68 11995.52 2080.46 74267.04 00:16:50.530 ======================================================== 00:16:50.530 Total : 20574.80 80.37 12444.74 697.01 74267.04 00:16:50.530 00:16:50.530 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:50.530 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 64070c56-fa6b-47ab-a0da-92d891e49a38 00:16:50.530 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11583cd4-b8e8-4dd7-892d-86a93679ca95 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.094 01:02:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.094 rmmod nvme_tcp 00:16:51.094 rmmod nvme_fabrics 00:16:51.094 rmmod nvme_keyring 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3745744 ']' 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3745744 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3745744 ']' 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3745744 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3745744 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3745744' 00:16:51.094 killing process with pid 3745744 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3745744 00:16:51.094 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3745744 00:16:51.352 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.352 
01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.352 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.352 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.352 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.352 01:02:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.352 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.352 01:02:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.251 01:02:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:53.251 00:16:53.251 real 0m18.676s 00:16:53.251 user 1m2.793s 00:16:53.251 sys 0m5.955s 00:16:53.251 01:02:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:53.251 01:02:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:53.251 ************************************ 00:16:53.251 END TEST nvmf_lvol 00:16:53.251 ************************************ 00:16:53.251 01:02:46 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:53.251 01:02:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:53.251 01:02:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.251 01:02:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.251 ************************************ 00:16:53.251 START TEST nvmf_lvs_grow 00:16:53.251 ************************************ 00:16:53.251 01:02:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:53.509 * Looking for test storage... 
00:16:53.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.509 01:02:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:55.437 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:55.437 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:55.437 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:55.437 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:55.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:16:55.437 00:16:55.437 --- 10.0.0.2 ping statistics --- 00:16:55.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.437 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:16:55.437 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:16:55.696 00:16:55.696 --- 10.0.0.1 ping statistics --- 00:16:55.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.696 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3749421 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3749421 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3749421 ']' 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:55.696 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:55.696 [2024-07-25 01:02:48.663711] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:16:55.696 [2024-07-25 01:02:48.663801] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.696 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.696 [2024-07-25 01:02:48.729656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.696 [2024-07-25 01:02:48.820412] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.696 [2024-07-25 01:02:48.820470] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
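For orientation, nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A rough equivalent, with paths shortened; the polling loop is an approximation of the waitforlisten helper in common/autotest_common.sh, not its literal code:

  # start the target on one core inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # block until the app responds on its default RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # only then is it safe to issue RPCs, starting with the TCP transport
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The nvmf_create_transport call appears in the trace just below, once the reactor has come up on core 0.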
00:16:55.696 [2024-07-25 01:02:48.820483] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.696 [2024-07-25 01:02:48.820495] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.696 [2024-07-25 01:02:48.820505] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.696 [2024-07-25 01:02:48.820531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.954 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:55.954 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:55.954 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:55.954 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.954 01:02:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:55.954 01:02:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.954 01:02:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:56.211 [2024-07-25 01:02:49.234317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.211 01:02:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:56.211 01:02:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:56.211 01:02:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:56.211 01:02:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:56.211 ************************************ 00:16:56.211 START TEST lvs_grow_clean 00:16:56.211 ************************************ 00:16:56.211 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:56.211 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:56.211 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:56.211 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:56.212 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:56.212 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:56.212 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:56.212 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:56.212 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:56.212 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:56.469 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:56.469 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:57.034 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:16:57.034 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:16:57.034 01:02:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:57.292 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:57.292 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:57.292 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f lvol 150 00:16:57.292 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2054ae3f-0f0a-4595-9884-50b938673b01 00:16:57.292 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:57.292 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:57.550 [2024-07-25 01:02:50.660668] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:57.550 [2024-07-25 01:02:50.660749] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:57.550 true 00:16:57.550 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:16:57.550 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:57.806 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:57.806 01:02:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:58.063 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2054ae3f-0f0a-4595-9884-50b938673b01 00:16:58.321 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:58.579 [2024-07-25 01:02:51.643726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.579 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3749854 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3749854 /var/tmp/bdevperf.sock 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3749854 ']' 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:58.836 01:02:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:58.836 [2024-07-25 01:02:51.943262] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
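The -z flag on the bdevperf invocation above starts it idle: no bdev exists yet, so the test attaches the exported namespace over bdevperf's private RPC socket and then triggers the timed run explicitly. Reduced to its three steps (workspace paths shortened):

  # 1. start bdevperf waiting for work on its own RPC socket
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # 2. attach the NVMe-oF controller so Nvme0n1 appears as a bdev
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # 3. kick off the configured randwrite workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Steps 2 and 3 are exactly the bdev_nvme_attach_controller and perform_tests calls traced below.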
00:16:58.836 [2024-07-25 01:02:51.943333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749854 ] 00:16:58.836 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.094 [2024-07-25 01:02:52.004940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.094 [2024-07-25 01:02:52.098423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.094 01:02:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:59.094 01:02:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:59.094 01:02:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:59.657 Nvme0n1 00:16:59.657 01:02:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:59.657 [ 00:16:59.657 { 00:16:59.657 "name": "Nvme0n1", 00:16:59.657 "aliases": [ 00:16:59.657 "2054ae3f-0f0a-4595-9884-50b938673b01" 00:16:59.657 ], 00:16:59.657 "product_name": "NVMe disk", 00:16:59.657 "block_size": 4096, 00:16:59.657 "num_blocks": 38912, 00:16:59.657 "uuid": "2054ae3f-0f0a-4595-9884-50b938673b01", 00:16:59.657 "assigned_rate_limits": { 00:16:59.657 "rw_ios_per_sec": 0, 00:16:59.657 "rw_mbytes_per_sec": 0, 00:16:59.657 "r_mbytes_per_sec": 0, 00:16:59.657 "w_mbytes_per_sec": 0 00:16:59.657 }, 00:16:59.657 "claimed": false, 00:16:59.657 "zoned": false, 00:16:59.657 "supported_io_types": { 00:16:59.657 "read": true, 00:16:59.657 "write": true, 00:16:59.657 "unmap": true, 00:16:59.657 "write_zeroes": true, 00:16:59.657 "flush": true, 00:16:59.657 "reset": true, 00:16:59.657 "compare": true, 00:16:59.657 "compare_and_write": true, 00:16:59.657 "abort": true, 00:16:59.657 "nvme_admin": true, 00:16:59.657 "nvme_io": true 00:16:59.657 }, 00:16:59.657 "memory_domains": [ 00:16:59.657 { 00:16:59.657 "dma_device_id": "system", 00:16:59.657 "dma_device_type": 1 00:16:59.657 } 00:16:59.657 ], 00:16:59.657 "driver_specific": { 00:16:59.657 "nvme": [ 00:16:59.657 { 00:16:59.657 "trid": { 00:16:59.657 "trtype": "TCP", 00:16:59.657 "adrfam": "IPv4", 00:16:59.657 "traddr": "10.0.0.2", 00:16:59.657 "trsvcid": "4420", 00:16:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:59.657 }, 00:16:59.657 "ctrlr_data": { 00:16:59.657 "cntlid": 1, 00:16:59.657 "vendor_id": "0x8086", 00:16:59.657 "model_number": "SPDK bdev Controller", 00:16:59.657 "serial_number": "SPDK0", 00:16:59.657 "firmware_revision": "24.05.1", 00:16:59.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:59.657 "oacs": { 00:16:59.657 "security": 0, 00:16:59.657 "format": 0, 00:16:59.657 "firmware": 0, 00:16:59.657 "ns_manage": 0 00:16:59.657 }, 00:16:59.657 "multi_ctrlr": true, 00:16:59.657 "ana_reporting": false 00:16:59.657 }, 00:16:59.657 "vs": { 00:16:59.657 "nvme_version": "1.3" 00:16:59.657 }, 00:16:59.657 "ns_data": { 00:16:59.657 "id": 1, 00:16:59.657 "can_share": true 00:16:59.657 } 00:16:59.657 } 00:16:59.657 ], 00:16:59.657 "mp_policy": "active_passive" 00:16:59.657 } 00:16:59.657 } 00:16:59.657 ] 00:16:59.657 01:02:52 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3749871 00:16:59.657 01:02:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:59.657 01:02:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:59.915 Running I/O for 10 seconds... 00:17:00.848 Latency(us) 00:17:00.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.848 Nvme0n1 : 1.00 14740.00 57.58 0.00 0.00 0.00 0.00 0.00 00:17:00.848 =================================================================================================================== 00:17:00.848 Total : 14740.00 57.58 0.00 0.00 0.00 0.00 0.00 00:17:00.848 00:17:01.781 01:02:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:01.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.781 Nvme0n1 : 2.00 14704.00 57.44 0.00 0.00 0.00 0.00 0.00 00:17:01.781 =================================================================================================================== 00:17:01.781 Total : 14704.00 57.44 0.00 0.00 0.00 0.00 0.00 00:17:01.781 00:17:02.039 true 00:17:02.039 01:02:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:02.039 01:02:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:02.297 01:02:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:02.297 01:02:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:02.297 01:02:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3749871 00:17:02.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.862 Nvme0n1 : 3.00 14931.00 58.32 0.00 0.00 0.00 0.00 0.00 00:17:02.862 =================================================================================================================== 00:17:02.862 Total : 14931.00 58.32 0.00 0.00 0.00 0.00 0.00 00:17:02.862 00:17:03.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.795 Nvme0n1 : 4.00 14992.75 58.57 0.00 0.00 0.00 0.00 0.00 00:17:03.795 =================================================================================================================== 00:17:03.795 Total : 14992.75 58.57 0.00 0.00 0.00 0.00 0.00 00:17:03.795 00:17:05.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.169 Nvme0n1 : 5.00 15043.20 58.76 0.00 0.00 0.00 0.00 0.00 00:17:05.169 =================================================================================================================== 00:17:05.169 Total : 15043.20 58.76 0.00 0.00 0.00 0.00 0.00 00:17:05.169 00:17:06.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.103 Nvme0n1 : 6.00 15132.33 59.11 0.00 0.00 0.00 0.00 0.00 00:17:06.103 
=================================================================================================================== 00:17:06.103 Total : 15132.33 59.11 0.00 0.00 0.00 0.00 0.00 00:17:06.103 00:17:07.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.037 Nvme0n1 : 7.00 15120.71 59.07 0.00 0.00 0.00 0.00 0.00 00:17:07.037 =================================================================================================================== 00:17:07.037 Total : 15120.71 59.07 0.00 0.00 0.00 0.00 0.00 00:17:07.037 00:17:07.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.970 Nvme0n1 : 8.00 15183.62 59.31 0.00 0.00 0.00 0.00 0.00 00:17:07.970 =================================================================================================================== 00:17:07.970 Total : 15183.62 59.31 0.00 0.00 0.00 0.00 0.00 00:17:07.970 00:17:08.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.903 Nvme0n1 : 9.00 15168.89 59.25 0.00 0.00 0.00 0.00 0.00 00:17:08.903 =================================================================================================================== 00:17:08.903 Total : 15168.89 59.25 0.00 0.00 0.00 0.00 0.00 00:17:08.903 00:17:09.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.834 Nvme0n1 : 10.00 15152.60 59.19 0.00 0.00 0.00 0.00 0.00 00:17:09.834 =================================================================================================================== 00:17:09.834 Total : 15152.60 59.19 0.00 0.00 0.00 0.00 0.00 00:17:09.834 00:17:09.834 00:17:09.834 Latency(us) 00:17:09.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.834 Nvme0n1 : 10.00 15158.23 59.21 0.00 0.00 8439.35 4975.88 18932.62 00:17:09.834 =================================================================================================================== 00:17:09.834 Total : 15158.23 59.21 0.00 0.00 8439.35 4975.88 18932.62 00:17:09.834 0 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3749854 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3749854 ']' 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3749854 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3749854 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3749854' 00:17:09.834 killing process with pid 3749854 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3749854 00:17:09.834 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.834 00:17:09.834 Latency(us) 00:17:09.834 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:09.834 =================================================================================================================== 00:17:09.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.834 01:03:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3749854 00:17:10.091 01:03:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:10.348 01:03:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:10.929 01:03:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:10.929 01:03:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:10.929 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:10.929 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:10.929 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:11.190 [2024-07-25 01:03:04.259989] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.190 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:11.191 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:11.448 request: 00:17:11.448 { 00:17:11.448 "uuid": "aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f", 00:17:11.448 "method": "bdev_lvol_get_lvstores", 00:17:11.448 "req_id": 1 00:17:11.448 } 00:17:11.448 Got JSON-RPC error response 00:17:11.448 response: 00:17:11.448 { 00:17:11.448 "code": -19, 00:17:11.448 "message": "No such device" 00:17:11.448 } 00:17:11.448 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:11.448 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.448 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.448 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.448 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:11.706 aio_bdev 00:17:11.706 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2054ae3f-0f0a-4595-9884-50b938673b01 00:17:11.706 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=2054ae3f-0f0a-4595-9884-50b938673b01 00:17:11.706 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:11.706 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:11.706 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:11.706 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:11.706 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:11.964 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2054ae3f-0f0a-4595-9884-50b938673b01 -t 2000 00:17:12.220 [ 00:17:12.220 { 00:17:12.220 "name": "2054ae3f-0f0a-4595-9884-50b938673b01", 00:17:12.220 "aliases": [ 00:17:12.220 "lvs/lvol" 00:17:12.220 ], 00:17:12.220 "product_name": "Logical Volume", 00:17:12.220 "block_size": 4096, 00:17:12.220 "num_blocks": 38912, 00:17:12.220 "uuid": "2054ae3f-0f0a-4595-9884-50b938673b01", 00:17:12.220 "assigned_rate_limits": { 00:17:12.220 "rw_ios_per_sec": 0, 00:17:12.220 "rw_mbytes_per_sec": 0, 00:17:12.220 "r_mbytes_per_sec": 0, 00:17:12.220 "w_mbytes_per_sec": 0 00:17:12.220 }, 00:17:12.220 "claimed": false, 00:17:12.220 "zoned": false, 00:17:12.221 "supported_io_types": { 00:17:12.221 "read": true, 00:17:12.221 "write": true, 00:17:12.221 "unmap": true, 00:17:12.221 "write_zeroes": true, 00:17:12.221 "flush": false, 00:17:12.221 "reset": true, 00:17:12.221 "compare": false, 00:17:12.221 "compare_and_write": false, 00:17:12.221 "abort": false, 00:17:12.221 "nvme_admin": false, 00:17:12.221 "nvme_io": false 00:17:12.221 }, 00:17:12.221 "driver_specific": { 00:17:12.221 "lvol": { 00:17:12.221 "lvol_store_uuid": "aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f", 00:17:12.221 "base_bdev": "aio_bdev", 
00:17:12.221 "thin_provision": false, 00:17:12.221 "num_allocated_clusters": 38, 00:17:12.221 "snapshot": false, 00:17:12.221 "clone": false, 00:17:12.221 "esnap_clone": false 00:17:12.221 } 00:17:12.221 } 00:17:12.221 } 00:17:12.221 ] 00:17:12.221 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:12.221 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:12.221 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:12.785 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:12.785 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:12.785 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:12.785 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:12.785 01:03:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2054ae3f-0f0a-4595-9884-50b938673b01 00:17:13.043 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa7d2fe7-9102-4bb9-aa46-f3bb0493e22f 00:17:13.300 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:13.558 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.558 00:17:13.558 real 0m17.404s 00:17:13.558 user 0m16.752s 00:17:13.558 sys 0m1.916s 00:17:13.558 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:13.558 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.558 ************************************ 00:17:13.558 END TEST lvs_grow_clean 00:17:13.558 ************************************ 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:13.815 ************************************ 00:17:13.815 START TEST lvs_grow_dirty 00:17:13.815 ************************************ 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.815 01:03:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:14.072 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:14.073 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:14.330 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:14.330 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:14.330 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:14.587 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:14.587 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:14.587 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e lvol 150 00:17:14.844 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bcb1835b-79a9-485d-8dc7-71aa4009fda2 00:17:14.844 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:14.844 01:03:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:15.102 [2024-07-25 01:03:08.038583] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:15.102 [2024-07-25 01:03:08.038677] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:15.102 true 00:17:15.102 01:03:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:15.102 01:03:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:15.360 01:03:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:15.360 01:03:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:15.618 01:03:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bcb1835b-79a9-485d-8dc7-71aa4009fda2 00:17:15.876 01:03:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:16.134 [2024-07-25 01:03:09.045690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.134 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3752517 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3752517 /var/tmp/bdevperf.sock 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3752517 ']' 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:16.393 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:16.393 [2024-07-25 01:03:09.382490] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
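[annotation] The step that follows shortly is the actual grow-under-I/O check: a couple of seconds into the randwrite run, the lvstore is grown into the space exposed by the earlier truncate/rescan (200M -> 400M), and the data-cluster count is asserted to roughly double. A sketch of just that check, assuming $lvs holds the lvstore UUID printed above:

    # Backing file already grown and the AIO bdev rescanned; now tell the
    # lvstore to claim the new clusters while bdevperf keeps writing.
    $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"

    data_clusters=$($SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
                    | jq -r '.[0].total_data_clusters')
    (( data_clusters == 99 ))   # 49 -> 99: ~400M / 4M clusters, less metadata overhead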
00:17:16.393 [2024-07-25 01:03:09.382589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752517 ] 00:17:16.393 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.393 [2024-07-25 01:03:09.442125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.393 [2024-07-25 01:03:09.528006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.651 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:16.651 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:16.651 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:16.909 Nvme0n1 00:17:16.909 01:03:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:17.167 [ 00:17:17.167 { 00:17:17.167 "name": "Nvme0n1", 00:17:17.167 "aliases": [ 00:17:17.167 "bcb1835b-79a9-485d-8dc7-71aa4009fda2" 00:17:17.167 ], 00:17:17.167 "product_name": "NVMe disk", 00:17:17.167 "block_size": 4096, 00:17:17.167 "num_blocks": 38912, 00:17:17.167 "uuid": "bcb1835b-79a9-485d-8dc7-71aa4009fda2", 00:17:17.167 "assigned_rate_limits": { 00:17:17.167 "rw_ios_per_sec": 0, 00:17:17.167 "rw_mbytes_per_sec": 0, 00:17:17.167 "r_mbytes_per_sec": 0, 00:17:17.167 "w_mbytes_per_sec": 0 00:17:17.167 }, 00:17:17.167 "claimed": false, 00:17:17.167 "zoned": false, 00:17:17.167 "supported_io_types": { 00:17:17.167 "read": true, 00:17:17.167 "write": true, 00:17:17.167 "unmap": true, 00:17:17.167 "write_zeroes": true, 00:17:17.167 "flush": true, 00:17:17.167 "reset": true, 00:17:17.167 "compare": true, 00:17:17.167 "compare_and_write": true, 00:17:17.167 "abort": true, 00:17:17.167 "nvme_admin": true, 00:17:17.167 "nvme_io": true 00:17:17.167 }, 00:17:17.167 "memory_domains": [ 00:17:17.167 { 00:17:17.167 "dma_device_id": "system", 00:17:17.167 "dma_device_type": 1 00:17:17.167 } 00:17:17.167 ], 00:17:17.167 "driver_specific": { 00:17:17.167 "nvme": [ 00:17:17.167 { 00:17:17.167 "trid": { 00:17:17.167 "trtype": "TCP", 00:17:17.167 "adrfam": "IPv4", 00:17:17.167 "traddr": "10.0.0.2", 00:17:17.167 "trsvcid": "4420", 00:17:17.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:17.167 }, 00:17:17.167 "ctrlr_data": { 00:17:17.167 "cntlid": 1, 00:17:17.167 "vendor_id": "0x8086", 00:17:17.167 "model_number": "SPDK bdev Controller", 00:17:17.167 "serial_number": "SPDK0", 00:17:17.167 "firmware_revision": "24.05.1", 00:17:17.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.167 "oacs": { 00:17:17.167 "security": 0, 00:17:17.167 "format": 0, 00:17:17.167 "firmware": 0, 00:17:17.167 "ns_manage": 0 00:17:17.167 }, 00:17:17.167 "multi_ctrlr": true, 00:17:17.167 "ana_reporting": false 00:17:17.167 }, 00:17:17.167 "vs": { 00:17:17.167 "nvme_version": "1.3" 00:17:17.167 }, 00:17:17.167 "ns_data": { 00:17:17.167 "id": 1, 00:17:17.167 "can_share": true 00:17:17.167 } 00:17:17.167 } 00:17:17.167 ], 00:17:17.167 "mp_policy": "active_passive" 00:17:17.167 } 00:17:17.167 } 00:17:17.167 ] 00:17:17.167 01:03:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3752654 00:17:17.167 01:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:17.167 01:03:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:17.425 Running I/O for 10 seconds... 00:17:18.359 Latency(us) 00:17:18.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.360 Nvme0n1 : 1.00 14671.00 57.31 0.00 0.00 0.00 0.00 0.00 00:17:18.360 =================================================================================================================== 00:17:18.360 Total : 14671.00 57.31 0.00 0.00 0.00 0.00 0.00 00:17:18.360 00:17:19.294 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:19.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.294 Nvme0n1 : 2.00 14766.50 57.68 0.00 0.00 0.00 0.00 0.00 00:17:19.294 =================================================================================================================== 00:17:19.294 Total : 14766.50 57.68 0.00 0.00 0.00 0.00 0.00 00:17:19.294 00:17:19.552 true 00:17:19.553 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:19.553 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:19.811 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:19.811 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:19.811 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3752654 00:17:20.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.381 Nvme0n1 : 3.00 14970.67 58.48 0.00 0.00 0.00 0.00 0.00 00:17:20.381 =================================================================================================================== 00:17:20.381 Total : 14970.67 58.48 0.00 0.00 0.00 0.00 0.00 00:17:20.381 00:17:21.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.312 Nvme0n1 : 4.00 14948.00 58.39 0.00 0.00 0.00 0.00 0.00 00:17:21.312 =================================================================================================================== 00:17:21.312 Total : 14948.00 58.39 0.00 0.00 0.00 0.00 0.00 00:17:21.312 00:17:22.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.246 Nvme0n1 : 5.00 15039.80 58.75 0.00 0.00 0.00 0.00 0.00 00:17:22.246 =================================================================================================================== 00:17:22.246 Total : 15039.80 58.75 0.00 0.00 0.00 0.00 0.00 00:17:22.246 00:17:23.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.620 Nvme0n1 : 6.00 15105.17 59.00 0.00 0.00 0.00 0.00 0.00 00:17:23.620 
=================================================================================================================== 00:17:23.620 Total : 15105.17 59.00 0.00 0.00 0.00 0.00 0.00 00:17:23.620 00:17:24.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.554 Nvme0n1 : 7.00 15125.29 59.08 0.00 0.00 0.00 0.00 0.00 00:17:24.554 =================================================================================================================== 00:17:24.554 Total : 15125.29 59.08 0.00 0.00 0.00 0.00 0.00 00:17:24.554 00:17:25.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.487 Nvme0n1 : 8.00 15181.25 59.30 0.00 0.00 0.00 0.00 0.00 00:17:25.487 =================================================================================================================== 00:17:25.487 Total : 15181.25 59.30 0.00 0.00 0.00 0.00 0.00 00:17:25.487 00:17:26.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.447 Nvme0n1 : 9.00 15169.56 59.26 0.00 0.00 0.00 0.00 0.00 00:17:26.447 =================================================================================================================== 00:17:26.447 Total : 15169.56 59.26 0.00 0.00 0.00 0.00 0.00 00:17:26.447 00:17:27.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.381 Nvme0n1 : 10.00 15189.30 59.33 0.00 0.00 0.00 0.00 0.00 00:17:27.381 =================================================================================================================== 00:17:27.381 Total : 15189.30 59.33 0.00 0.00 0.00 0.00 0.00 00:17:27.381 00:17:27.381 00:17:27.381 Latency(us) 00:17:27.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.381 Nvme0n1 : 10.01 15194.33 59.35 0.00 0.00 8419.45 4417.61 21942.42 00:17:27.381 =================================================================================================================== 00:17:27.381 Total : 15194.33 59.35 0.00 0.00 8419.45 4417.61 21942.42 00:17:27.381 0 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3752517 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3752517 ']' 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3752517 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3752517 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3752517' 00:17:27.381 killing process with pid 3752517 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3752517 00:17:27.381 Received shutdown signal, test time was about 10.000000 seconds 00:17:27.381 00:17:27.381 Latency(us) 00:17:27.381 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:27.381 =================================================================================================================== 00:17:27.381 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.381 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3752517 00:17:27.639 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:27.896 01:03:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:28.154 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:28.154 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3749421 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3749421 00:17:28.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3749421 Killed "${NVMF_APP[@]}" "$@" 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3753985 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3753985 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3753985 ']' 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
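[annotation] This is where the dirty variant earns its name: the first nvmf target (pid 3749421) is killed with SIGKILL so the lvstore never gets a clean shutdown, and a fresh target is started in its place. Roughly, with $aio_file standing in for the shared backing file and the netns wrapper omitted (both assumptions for brevity):

    kill -9 "$nvmfpid"                            # no clean unload -> lvstore left dirty
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # brand-new target process
    $SPDK/scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
    # On attach, blobstore recovery replays the metadata ("Performing recovery
    # on blobstore" / "Recover: blob 0x0 / 0x1" in the log just below) and the
    # lvstore plus its lvol reappear without data loss.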
00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:28.413 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:28.413 [2024-07-25 01:03:21.543798] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:28.413 [2024-07-25 01:03:21.543881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.671 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.671 [2024-07-25 01:03:21.616033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.671 [2024-07-25 01:03:21.702670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.671 [2024-07-25 01:03:21.702727] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.671 [2024-07-25 01:03:21.702740] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.671 [2024-07-25 01:03:21.702752] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.671 [2024-07-25 01:03:21.702761] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.671 [2024-07-25 01:03:21.702787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.671 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:28.671 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:28.671 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.671 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.671 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:28.929 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.929 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:28.929 [2024-07-25 01:03:22.058470] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:28.929 [2024-07-25 01:03:22.058612] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:28.929 [2024-07-25 01:03:22.058669] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bcb1835b-79a9-485d-8dc7-71aa4009fda2 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=bcb1835b-79a9-485d-8dc7-71aa4009fda2 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:29.188 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bcb1835b-79a9-485d-8dc7-71aa4009fda2 -t 2000 00:17:29.446 [ 00:17:29.446 { 00:17:29.446 "name": "bcb1835b-79a9-485d-8dc7-71aa4009fda2", 00:17:29.446 "aliases": [ 00:17:29.446 "lvs/lvol" 00:17:29.447 ], 00:17:29.447 "product_name": "Logical Volume", 00:17:29.447 "block_size": 4096, 00:17:29.447 "num_blocks": 38912, 00:17:29.447 "uuid": "bcb1835b-79a9-485d-8dc7-71aa4009fda2", 00:17:29.447 "assigned_rate_limits": { 00:17:29.447 "rw_ios_per_sec": 0, 00:17:29.447 "rw_mbytes_per_sec": 0, 00:17:29.447 "r_mbytes_per_sec": 0, 00:17:29.447 "w_mbytes_per_sec": 0 00:17:29.447 }, 00:17:29.447 "claimed": false, 00:17:29.447 "zoned": false, 00:17:29.447 "supported_io_types": { 00:17:29.447 "read": true, 00:17:29.447 "write": true, 00:17:29.447 "unmap": true, 00:17:29.447 "write_zeroes": true, 00:17:29.447 "flush": false, 00:17:29.447 "reset": true, 00:17:29.447 "compare": false, 00:17:29.447 "compare_and_write": false, 00:17:29.447 "abort": false, 00:17:29.447 "nvme_admin": false, 00:17:29.447 "nvme_io": false 00:17:29.447 }, 00:17:29.447 "driver_specific": { 00:17:29.447 "lvol": { 00:17:29.447 "lvol_store_uuid": "5a6b70f1-8fd5-4fc5-b957-543f0a69873e", 00:17:29.447 "base_bdev": "aio_bdev", 00:17:29.447 "thin_provision": false, 00:17:29.447 "num_allocated_clusters": 38, 00:17:29.447 "snapshot": false, 00:17:29.447 "clone": false, 00:17:29.447 "esnap_clone": false 00:17:29.447 } 00:17:29.447 } 00:17:29.447 } 00:17:29.447 ] 00:17:29.447 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:29.447 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:29.447 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:29.705 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:29.705 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:29.705 01:03:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:29.963 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:29.963 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:30.220 [2024-07-25 01:03:23.331679] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.477 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:30.478 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:30.735 request: 00:17:30.735 { 00:17:30.735 "uuid": "5a6b70f1-8fd5-4fc5-b957-543f0a69873e", 00:17:30.735 "method": "bdev_lvol_get_lvstores", 00:17:30.735 "req_id": 1 00:17:30.735 } 00:17:30.735 Got JSON-RPC error response 00:17:30.735 response: 00:17:30.735 { 00:17:30.735 "code": -19, 00:17:30.735 "message": "No such device" 00:17:30.735 } 00:17:30.735 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:30.735 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:30.735 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:30.735 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:30.735 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:30.735 aio_bdev 00:17:30.993 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bcb1835b-79a9-485d-8dc7-71aa4009fda2 00:17:30.993 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=bcb1835b-79a9-485d-8dc7-71aa4009fda2 00:17:30.993 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:30.993 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:30.993 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
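[annotation] The tail of both variants is the same teardown dance, sketched below: deleting the base AIO bdev must take the lvstore with it, the NOT-wrapped RPC above asserts the expected -19 "No such device" failure, and re-creating the bdev must bring the lvstore back with its cluster accounting intact ($lvs again stands for the store UUID):

    $SPDK/scripts/rpc.py bdev_aio_delete aio_bdev        # closes lvstore 'lvs' too
    if $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore should be gone" >&2; exit 1        # harness wraps this in NOT
    fi
    $SPDK/scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'
    # -> 61: the 150M lvol still owns 38 of the 99 clusters after the reload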
00:17:30.993 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:30.993 01:03:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:30.993 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bcb1835b-79a9-485d-8dc7-71aa4009fda2 -t 2000 00:17:31.558 [ 00:17:31.558 { 00:17:31.558 "name": "bcb1835b-79a9-485d-8dc7-71aa4009fda2", 00:17:31.558 "aliases": [ 00:17:31.558 "lvs/lvol" 00:17:31.558 ], 00:17:31.558 "product_name": "Logical Volume", 00:17:31.558 "block_size": 4096, 00:17:31.558 "num_blocks": 38912, 00:17:31.558 "uuid": "bcb1835b-79a9-485d-8dc7-71aa4009fda2", 00:17:31.558 "assigned_rate_limits": { 00:17:31.558 "rw_ios_per_sec": 0, 00:17:31.558 "rw_mbytes_per_sec": 0, 00:17:31.558 "r_mbytes_per_sec": 0, 00:17:31.558 "w_mbytes_per_sec": 0 00:17:31.558 }, 00:17:31.558 "claimed": false, 00:17:31.558 "zoned": false, 00:17:31.558 "supported_io_types": { 00:17:31.558 "read": true, 00:17:31.558 "write": true, 00:17:31.558 "unmap": true, 00:17:31.558 "write_zeroes": true, 00:17:31.558 "flush": false, 00:17:31.558 "reset": true, 00:17:31.558 "compare": false, 00:17:31.558 "compare_and_write": false, 00:17:31.558 "abort": false, 00:17:31.558 "nvme_admin": false, 00:17:31.558 "nvme_io": false 00:17:31.558 }, 00:17:31.558 "driver_specific": { 00:17:31.558 "lvol": { 00:17:31.558 "lvol_store_uuid": "5a6b70f1-8fd5-4fc5-b957-543f0a69873e", 00:17:31.558 "base_bdev": "aio_bdev", 00:17:31.558 "thin_provision": false, 00:17:31.558 "num_allocated_clusters": 38, 00:17:31.558 "snapshot": false, 00:17:31.558 "clone": false, 00:17:31.558 "esnap_clone": false 00:17:31.558 } 00:17:31.558 } 00:17:31.558 } 00:17:31.558 ] 00:17:31.558 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:31.558 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:31.558 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:31.558 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:31.558 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:31.558 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:31.816 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:31.816 01:03:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bcb1835b-79a9-485d-8dc7-71aa4009fda2 00:17:32.074 01:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a6b70f1-8fd5-4fc5-b957-543f0a69873e 00:17:32.332 01:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:32.589 01:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:32.847 00:17:32.847 real 0m19.023s 00:17:32.847 user 0m48.046s 00:17:32.847 sys 0m4.627s 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:32.847 ************************************ 00:17:32.847 END TEST lvs_grow_dirty 00:17:32.847 ************************************ 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:32.847 nvmf_trace.0 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.847 rmmod nvme_tcp 00:17:32.847 rmmod nvme_fabrics 00:17:32.847 rmmod nvme_keyring 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3753985 ']' 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3753985 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3753985 ']' 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3753985 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3753985 00:17:32.847 01:03:25 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3753985' 00:17:32.847 killing process with pid 3753985 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3753985 00:17:32.847 01:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3753985 00:17:33.105 01:03:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.105 01:03:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.105 01:03:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.105 01:03:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.105 01:03:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.105 01:03:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.105 01:03:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.105 01:03:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.003 01:03:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.003 00:17:35.003 real 0m41.753s 00:17:35.003 user 1m10.522s 00:17:35.003 sys 0m8.396s 00:17:35.003 01:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:35.003 01:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:35.003 ************************************ 00:17:35.003 END TEST nvmf_lvs_grow 00:17:35.003 ************************************ 00:17:35.262 01:03:28 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:35.262 01:03:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:35.262 01:03:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:35.262 01:03:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.262 ************************************ 00:17:35.262 START TEST nvmf_bdev_io_wait 00:17:35.262 ************************************ 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:35.262 * Looking for test storage... 
00:17:35.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.262 01:03:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.161 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.161 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.161 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.161 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:37.161 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:37.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:37.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:17:37.162 00:17:37.162 --- 10.0.0.2 ping statistics --- 00:17:37.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.162 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:17:37.162 00:17:37.162 --- 10.0.0.1 ping statistics --- 00:17:37.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.162 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:37.162 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3756392 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3756392 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3756392 ']' 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.420 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.420 [2024-07-25 01:03:30.364444] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:17:37.420 [2024-07-25 01:03:30.364526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.420 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.420 [2024-07-25 01:03:30.433484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.420 [2024-07-25 01:03:30.521787] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.420 [2024-07-25 01:03:30.521834] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.420 [2024-07-25 01:03:30.521856] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.420 [2024-07-25 01:03:30.521867] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.420 [2024-07-25 01:03:30.521876] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.420 [2024-07-25 01:03:30.521973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.420 [2024-07-25 01:03:30.522040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.420 [2024-07-25 01:03:30.522107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.420 [2024-07-25 01:03:30.522109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.679 [2024-07-25 01:03:30.698791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.679 01:03:30 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.679 Malloc0 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.679 [2024-07-25 01:03:30.760970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3756531 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3756533 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:37.679 { 00:17:37.679 "params": { 00:17:37.679 "name": "Nvme$subsystem", 00:17:37.679 "trtype": "$TEST_TRANSPORT", 00:17:37.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.679 "adrfam": "ipv4", 00:17:37.679 "trsvcid": "$NVMF_PORT", 00:17:37.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.679 "hdgst": ${hdgst:-false}, 00:17:37.679 "ddgst": ${ddgst:-false} 00:17:37.679 }, 00:17:37.679 "method": "bdev_nvme_attach_controller" 00:17:37.679 } 00:17:37.679 EOF 00:17:37.679 )") 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3756535 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3756537 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:37.679 { 00:17:37.679 "params": { 00:17:37.679 "name": "Nvme$subsystem", 00:17:37.679 "trtype": "$TEST_TRANSPORT", 00:17:37.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.679 "adrfam": "ipv4", 00:17:37.679 "trsvcid": "$NVMF_PORT", 00:17:37.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.679 "hdgst": ${hdgst:-false}, 00:17:37.679 "ddgst": ${ddgst:-false} 00:17:37.679 }, 00:17:37.679 "method": "bdev_nvme_attach_controller" 00:17:37.679 } 00:17:37.679 EOF 00:17:37.679 )") 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:37.679 { 00:17:37.679 "params": { 00:17:37.679 "name": "Nvme$subsystem", 00:17:37.679 "trtype": "$TEST_TRANSPORT", 00:17:37.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.679 "adrfam": "ipv4", 00:17:37.679 "trsvcid": "$NVMF_PORT", 00:17:37.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.679 "hdgst": ${hdgst:-false}, 00:17:37.679 "ddgst": ${ddgst:-false} 00:17:37.679 }, 00:17:37.679 "method": "bdev_nvme_attach_controller" 00:17:37.679 } 00:17:37.679 EOF 00:17:37.679 )") 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:37.679 { 00:17:37.679 "params": { 00:17:37.679 "name": "Nvme$subsystem", 00:17:37.679 "trtype": "$TEST_TRANSPORT", 00:17:37.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.679 "adrfam": "ipv4", 00:17:37.679 "trsvcid": "$NVMF_PORT", 00:17:37.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.679 "hdgst": ${hdgst:-false}, 00:17:37.679 "ddgst": ${ddgst:-false} 00:17:37.679 }, 00:17:37.679 "method": "bdev_nvme_attach_controller" 00:17:37.679 } 00:17:37.679 EOF 00:17:37.679 )") 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3756531 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:37.679 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.679 "params": { 00:17:37.679 "name": "Nvme1", 00:17:37.679 "trtype": "tcp", 00:17:37.679 "traddr": "10.0.0.2", 00:17:37.679 "adrfam": "ipv4", 00:17:37.679 "trsvcid": "4420", 00:17:37.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.680 "hdgst": false, 00:17:37.680 "ddgst": false 00:17:37.680 }, 00:17:37.680 "method": "bdev_nvme_attach_controller" 00:17:37.680 }' 00:17:37.680 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:37.680 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.680 "params": { 00:17:37.680 "name": "Nvme1", 00:17:37.680 "trtype": "tcp", 00:17:37.680 "traddr": "10.0.0.2", 00:17:37.680 "adrfam": "ipv4", 00:17:37.680 "trsvcid": "4420", 00:17:37.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.680 "hdgst": false, 00:17:37.680 "ddgst": false 00:17:37.680 }, 00:17:37.680 "method": "bdev_nvme_attach_controller" 00:17:37.680 }' 00:17:37.680 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:37.680 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.680 "params": { 00:17:37.680 "name": "Nvme1", 00:17:37.680 "trtype": "tcp", 00:17:37.680 "traddr": "10.0.0.2", 00:17:37.680 "adrfam": "ipv4", 00:17:37.680 "trsvcid": "4420", 00:17:37.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.680 "hdgst": false, 00:17:37.680 "ddgst": false 00:17:37.680 }, 00:17:37.680 "method": "bdev_nvme_attach_controller" 00:17:37.680 }' 00:17:37.680 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:37.680 01:03:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.680 "params": { 00:17:37.680 "name": "Nvme1", 00:17:37.680 "trtype": "tcp", 00:17:37.680 "traddr": "10.0.0.2", 00:17:37.680 "adrfam": "ipv4", 00:17:37.680 "trsvcid": "4420", 00:17:37.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.680 "hdgst": false, 00:17:37.680 "ddgst": false 00:17:37.680 }, 00:17:37.680 "method": "bdev_nvme_attach_controller" 
00:17:37.680 }' 00:17:37.680 [2024-07-25 01:03:30.808922] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:37.680 [2024-07-25 01:03:30.808923] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:37.680 [2024-07-25 01:03:30.808923] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:37.680 [2024-07-25 01:03:30.809046] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:37.680 [2024-07-25 01:03:30.809046] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:37.680 [2024-07-25 01:03:30.809046] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:37.680 [2024-07-25 01:03:30.809104] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:17:37.680 [2024-07-25 01:03:30.809188] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:37.938 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.938 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.938 [2024-07-25 01:03:30.980383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.938 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.938 [2024-07-25 01:03:31.055102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:37.938 [2024-07-25 01:03:31.078849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.194 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.194 [2024-07-25 01:03:31.153886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:38.194 [2024-07-25 01:03:31.177204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.194 [2024-07-25 01:03:31.252724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:38.194 [2024-07-25 01:03:31.279114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.451 [2024-07-25 01:03:31.358584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:38.451 Running I/O for 1 seconds... 00:17:38.451 Running I/O for 1 seconds... 00:17:38.451 Running I/O for 1 seconds... 00:17:38.709 Running I/O for 1 seconds...
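[Editor's note] The four bdevperf runs above are launched in parallel, one per I/O type (write, read, flush, unmap). A minimal sketch of that fan-out, assuming the bdevperf path and the gen_nvmf_target_json helper exactly as they appear in this log; the loop and array names here are illustrative, not the script's own variables:

```bash
#!/usr/bin/env bash
# Sketch only: mirrors the four bdevperf invocations traced above.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
workloads=(write read flush unmap)
masks=(0x10 0x20 0x40 0x80)
pids=()
for i in 0 1 2 3; do
    # Each process gets its own core mask (-m) and instance ID (-i); the
    # instance ID is what makes DPDK pick distinct --file-prefix values
    # (spdk1..spdk4), so the hugepage-backed shm files never collide.
    "$BDEVPERF" -m "${masks[i]}" -i "$((i + 1))" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "${workloads[i]}" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"   # the script itself waits on $WRITE_PID, $READ_PID, etc.
```

The `--json /dev/fd/63` seen in the log is exactly this `<(...)` process substitution: each instance reads its single-controller `bdev_nvme_attach_controller` config from an anonymous pipe rather than a file on disk, which also explains why the four EAL banners interleave in the output.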
00:17:39.646 00:17:39.646 Latency(us) 00:17:39.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.646 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:39.646 Nvme1n1 : 1.00 179144.58 699.78 0.00 0.00 711.70 276.10 958.77 00:17:39.646 =================================================================================================================== 00:17:39.646 Total : 179144.58 699.78 0.00 0.00 711.70 276.10 958.77 00:17:39.646 00:17:39.646 Latency(us) 00:17:39.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.646 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:39.646 Nvme1n1 : 1.01 7176.88 28.03 0.00 0.00 17695.17 9223.59 24660.95 00:17:39.646 =================================================================================================================== 00:17:39.646 Total : 7176.88 28.03 0.00 0.00 17695.17 9223.59 24660.95 00:17:39.646 00:17:39.646 Latency(us) 00:17:39.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.646 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:39.646 Nvme1n1 : 1.01 9091.74 35.51 0.00 0.00 14011.56 8446.86 27573.67 00:17:39.646 =================================================================================================================== 00:17:39.646 Total : 9091.74 35.51 0.00 0.00 14011.56 8446.86 27573.67 00:17:39.646 00:17:39.646 Latency(us) 00:17:39.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.646 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:39.646 Nvme1n1 : 1.00 7546.97 29.48 0.00 0.00 16870.26 4757.43 45632.47 00:17:39.646 =================================================================================================================== 00:17:39.646 Total : 7546.97 29.48 0.00 0.00 16870.26 4757.43 45632.47 00:17:39.646 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3756533 00:17:39.646 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3756535 00:17:39.646 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3756537 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.903 01:03:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.903 rmmod nvme_tcp 00:17:39.903 rmmod nvme_fabrics 00:17:39.903 rmmod nvme_keyring 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3756392 ']' 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3756392 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3756392 ']' 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3756392 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3756392 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3756392' 00:17:39.903 killing process with pid 3756392 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3756392 00:17:39.903 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3756392 00:17:40.161 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.161 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.161 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.161 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.161 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.161 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.161 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.161 01:03:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.730 01:03:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:42.730 00:17:42.730 real 0m7.124s 00:17:42.730 user 0m15.790s 00:17:42.730 sys 0m3.479s 00:17:42.730 01:03:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:42.730 01:03:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.730 ************************************ 00:17:42.730 END TEST nvmf_bdev_io_wait 00:17:42.730 ************************************ 00:17:42.730 01:03:35 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:42.730 01:03:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:42.730 01:03:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:42.730 01:03:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:42.730 ************************************ 00:17:42.730 START TEST nvmf_queue_depth 00:17:42.730 ************************************ 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:42.730 * Looking for test storage... 00:17:42.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:42.730 01:03:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:42.731 01:03:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:44.631 
01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:44.631 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:44.631 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:44.631 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:44.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:44.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:44.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:44.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:17:44.632 00:17:44.632 --- 10.0.0.2 ping statistics --- 00:17:44.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.632 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:17:44.632 00:17:44.632 --- 10.0.0.1 ping statistics --- 00:17:44.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.632 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3758753 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3758753 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3758753 ']' 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:44.632 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.632 [2024-07-25 01:03:37.641815] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:17:44.632 [2024-07-25 01:03:37.641896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.632 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.632 [2024-07-25 01:03:37.707940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.890 [2024-07-25 01:03:37.797598] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.890 [2024-07-25 01:03:37.797661] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.890 [2024-07-25 01:03:37.797675] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.890 [2024-07-25 01:03:37.797686] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.890 [2024-07-25 01:03:37.797703] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.890 [2024-07-25 01:03:37.797729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.891 [2024-07-25 01:03:37.943146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.891 Malloc0 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.891 01:03:37 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.891 01:03:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.891 [2024-07-25 01:03:38.004852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3758773 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3758773 /var/tmp/bdevperf.sock 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3758773 ']' 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:44.891 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:45.149 [2024-07-25 01:03:38.050691] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
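(Annotation.) The rpc_cmd calls traced above wire up the whole data path: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and a listener on 10.0.0.2:4420; bdevperf then attaches to it over its own RPC socket. A minimal sketch of the same sequence issued by hand from an SPDK checkout, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock as in SPDK's test harness (all arguments below are taken verbatim from the trace):

  # Target side: transport, backing bdev, subsystem, namespace, listener.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: tell the bdevperf app (listening on its own socket)
  # to attach the exported namespace as bdev NVMe0.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The -q 1024 on the bdevperf command line above is what gives this suite its name: the verify workload is driven at queue depth 1024 against the single namespace.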
00:17:45.149 [2024-07-25 01:03:38.050755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758773 ]
00:17:45.149 EAL: No free 2048 kB hugepages reported on node 1
00:17:45.149 [2024-07-25 01:03:38.112958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:45.149 [2024-07-25 01:03:38.203192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:45.407 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:45.407 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0
00:17:45.407 01:03:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:45.407 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:45.407 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:45.407 NVMe0n1
00:17:45.407 01:03:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:45.407 01:03:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:45.407 Running I/O for 10 seconds...
00:17:57.601
00:17:57.601 Latency(us)
00:17:57.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:57.601 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:17:57.601 Verification LBA range: start 0x0 length 0x4000
00:17:57.601 NVMe0n1 : 10.09 8575.71 33.50 0.00 0.00 118804.32 24855.13 73011.96
00:17:57.601 ===================================================================================================================
00:17:57.601 Total : 8575.71 33.50 0.00 0.00 118804.32 24855.13 73011.96
00:17:57.601 0
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3758773
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3758773 ']'
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3758773
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3758773
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3758773'
00:17:57.601 killing process with pid 3758773
00:17:57.601 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3758773
00:17:57.602 Received shutdown signal, test time was about 10.000000 seconds
00:17:57.602
00:17:57.602 Latency(us)
00:17:57.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:57.602 ===================================================================================================================
00:17:57.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3758773
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:57.602 rmmod nvme_tcp
00:17:57.602 rmmod nvme_fabrics
00:17:57.602 rmmod nvme_keyring
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3758753 ']'
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3758753
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3758753 ']'
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3758753
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:57.602 01:03:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3758753
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3758753'
00:17:57.602 killing process with pid 3758753
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3758753
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3758753
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:57.602 01:03:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:58.166 01:03:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:58.166
00:17:58.166 real 0m15.947s
00:17:58.166 user 0m22.470s
00:17:58.166 sys
0m2.989s 00:17:58.166 01:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.166 01:03:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.166 ************************************ 00:17:58.166 END TEST nvmf_queue_depth 00:17:58.166 ************************************ 00:17:58.424 01:03:51 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:58.424 01:03:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:58.424 01:03:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.424 01:03:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.424 ************************************ 00:17:58.424 START TEST nvmf_target_multipath 00:17:58.424 ************************************ 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:58.424 * Looking for test storage... 00:17:58.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.424 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.425 01:03:51 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
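(Annotation.) The ever-growing PATH blocks above come from /etc/opt/spdk-pkgdep/paths/export.sh, which is re-sourced by every test suite and unconditionally prepends the protoc, Go and golangci-lint directories, so each nested source adds another copy of the same three entries. Harmless, but noisy in the trace; a duplicate-safe variant would look like the sketch below (path_prepend is a hypothetical helper, not part of the SPDK scripts traced here):

  # Sketch of a duplicate-safe export.sh; path_prepend is hypothetical.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;                       # already on PATH: leave it alone
          *) PATH="$1${PATH:+:$PATH}" ;;     # otherwise prepend exactly once
      esac
  }
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/golangci/1.54.2/bin
  export PATH

Because sourcing the sketch twice is a no-op, repeated `source`s like the ones traced in this log would leave PATH unchanged after the first pass.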
00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.425 01:03:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:00.332 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:00.332 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:00.332 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:00.332 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.332 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:18:00.593 00:18:00.593 --- 10.0.0.2 ping statistics --- 00:18:00.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.593 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:18:00.593 00:18:00.593 --- 10.0.0.1 ping statistics --- 00:18:00.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.593 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:00.593 only one NIC for nvmf test 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:00.593 rmmod nvme_tcp 00:18:00.593 rmmod nvme_fabrics 00:18:00.593 rmmod nvme_keyring 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.593 01:03:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.122 00:18:03.122 real 0m4.360s 00:18:03.122 user 0m0.830s 00:18:03.122 sys 0m1.520s 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:03.122 01:03:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:03.122 ************************************ 00:18:03.122 END TEST nvmf_target_multipath 00:18:03.122 ************************************ 00:18:03.122 01:03:55 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:03.122 01:03:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:03.122 01:03:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:03.122 01:03:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.122 ************************************ 00:18:03.122 START TEST nvmf_zcopy 00:18:03.122 ************************************ 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:03.122 * Looking for test storage... 
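(Annotation.) nvmf_zcopy now repeats the same nvmftestinit bring-up already traced twice above for queue_depth and multipath. Condensed from those traces, the topology setup reduces to the command sequence below; the interface, namespace and address names are exactly the ones in this log (ice ports cvl_0_0/cvl_0_1), and root is required:

  # The nvmf_tcp_init sequence as traced earlier in this log.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target sanity check

Moving one port of the dual-port NIC into its own network namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1) over real hardware; the iptables rule opens TCP 4420 for the listener.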
00:18:03.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.122 01:03:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.123 01:03:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:05.020 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.020 
01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:05.020 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:05.020 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:05.020 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:05.020 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:05.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:05.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms
00:18:05.021
00:18:05.021 --- 10.0.0.2 ping statistics ---
00:18:05.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:05.021 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:05.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:05.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms
00:18:05.021
00:18:05.021 --- 10.0.0.1 ping statistics ---
00:18:05.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:05.021 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3763936
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3763936
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3763936 ']'
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:05.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:05.021 01:03:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.021 [2024-07-25 01:03:57.987085] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
00:18:05.021 [2024-07-25 01:03:57.987189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:05.021 EAL: No free 2048 kB hugepages reported on node 1
00:18:05.021 [2024-07-25 01:03:58.064346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:05.021 [2024-07-25 01:03:58.158873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:05.021 [2024-07-25 01:03:58.158939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:05.021 [2024-07-25 01:03:58.158955] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:05.021 [2024-07-25 01:03:58.158969] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:05.021 [2024-07-25 01:03:58.158981] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:05.021 [2024-07-25 01:03:58.159018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.279 [2024-07-25 01:03:58.310982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.279 [2024-07-25 01:03:58.327224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.279 malloc0
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:05.279 {
00:18:05.279   "params": {
00:18:05.279     "name": "Nvme$subsystem",
00:18:05.279     "trtype": "$TEST_TRANSPORT",
00:18:05.279     "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:05.279     "adrfam": "ipv4",
00:18:05.279     "trsvcid": "$NVMF_PORT",
00:18:05.279     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:05.279     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:05.279     "hdgst": ${hdgst:-false},
00:18:05.279     "ddgst": ${ddgst:-false}
00:18:05.279   },
00:18:05.279   "method": "bdev_nvme_attach_controller"
00:18:05.279 }
00:18:05.279 EOF
00:18:05.279 )")
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:05.279 01:03:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:05.279   "params": {
00:18:05.279     "name": "Nvme1",
00:18:05.279     "trtype": "tcp",
00:18:05.279     "traddr": "10.0.0.2",
00:18:05.279     "adrfam": "ipv4",
00:18:05.279     "trsvcid": "4420",
00:18:05.279     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:05.279     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:05.279     "hdgst": false,
00:18:05.279     "ddgst": false
00:18:05.279   },
00:18:05.279   "method": "bdev_nvme_attach_controller"
00:18:05.279 }'
00:18:05.279 [2024-07-25 01:03:58.408929] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
00:18:05.279 [2024-07-25 01:03:58.408997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763966 ]
00:18:05.536 EAL: No free 2048 kB hugepages reported on node 1
00:18:05.536 [2024-07-25 01:03:58.473232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:05.536 [2024-07-25 01:03:58.571414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:05.794 Running I/O for 10 seconds...
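The invocation above is the interesting part of this step: target/zcopy.sh@33 hands bdevperf its bdev configuration as JSON over an inherited file descriptor (--json /dev/fd/62) rather than a file on disk, so the NVMe/TCP controller (Nvme1 -> 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) is attached before the verify workload starts. A minimal standalone sketch of the same pattern, run from an SPDK build tree, with the caveat that the outer "subsystems"/"config" wrapper is an assumption (the log only shows the inner bdev_nvme_attach_controller object that gen_nvmf_target_json builds):

# Sketch only: same flags as the logged run; the heredoc is opened on fd 62
# so bdevperf can read it back through /dev/fd/62. Parameter values are
# copied from the resolved JSON printed above.
./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62<<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false } } ] } ] }
EOF

Passing the config over a file descriptor keeps the harness free of temporary files; the real script builds the same JSON dynamically with gen_nvmf_target_json, as traced above.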
00:18:15.787
00:18:15.787                                                                            Latency(us)
00:18:15.787 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:15.787 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:15.787   Verification LBA range: start 0x0 length 0x1000
00:18:15.787   Nvme1n1                                                                 :      10.02    5707.46      44.59       0.00     0.00   22362.48    2245.21   33399.09
00:18:15.787 ===================================================================================================================
00:18:15.787 Total                                                                     :               5707.46      44.59       0.00     0.00   22362.48    2245.21   33399.09
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3765157
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
01:04:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
01:04:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
01:04:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
01:04:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:16.045 {
00:18:16.045   "params": {
00:18:16.045     "name": "Nvme$subsystem",
00:18:16.045     "trtype": "$TEST_TRANSPORT",
00:18:16.045     "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:16.045     "adrfam": "ipv4",
00:18:16.045     "trsvcid": "$NVMF_PORT",
00:18:16.045     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:16.045     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:16.045     "hdgst": ${hdgst:-false},
00:18:16.045     "ddgst": ${ddgst:-false}
00:18:16.045   },
00:18:16.045   "method": "bdev_nvme_attach_controller"
00:18:16.045 }
00:18:16.045 EOF
00:18:16.045 )")
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:16.045 [2024-07-25 01:04:09.031980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:16.045 [2024-07-25 01:04:09.032028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
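At this point a second bdevperf job (5-second randrw at a 50% read mix, fd-passed JSON as before) is started in the background and its pid recorded (perfpid=3765157), while the foreground script keeps re-issuing the namespace-add RPC against cnode1. Every attempt fails with the subsystem.c:2029 / nvmf_rpc.c:1546 pair that fills the rest of this section, which is the point of the exercise: the target must keep serving zero-copy I/O while rejecting a duplicate NSID. A hedged sketch of the driving loop follows; the real one lives in target/zcopy.sh and its exact shape, including the termination condition, is not visible in this log:

# Assumed shape only: hammer the duplicate-NSID error path for as long as
# the background bdevperf job (pid in $perfpid) is still alive.
while kill -0 "$perfpid" 2> /dev/null; do
	rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done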
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:16.045 01:04:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:16.045   "params": {
00:18:16.045     "name": "Nvme1",
00:18:16.045     "trtype": "tcp",
00:18:16.045     "traddr": "10.0.0.2",
00:18:16.045     "adrfam": "ipv4",
00:18:16.045     "trsvcid": "4420",
00:18:16.045     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:16.045     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:16.045     "hdgst": false,
00:18:16.045     "ddgst": false
00:18:16.045   },
00:18:16.045   "method": "bdev_nvme_attach_controller"
00:18:16.045 }'
00:18:16.045 [2024-07-25 01:04:09.039938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:16.045 [2024-07-25 01:04:09.039965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:16.045 [2024-07-25 01:04:09.047955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:16.045 [2024-07-25 01:04:09.047980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:16.045 [2024-07-25 01:04:09.055975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:16.045 [2024-07-25 01:04:09.055999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:16.045 [2024-07-25 01:04:09.063989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:16.045 [2024-07-25 01:04:09.064020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:16.045 [2024-07-25 01:04:09.072008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:16.045 [2024-07-25 01:04:09.072028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:16.045 [2024-07-25 01:04:09.074607] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
00:18:16.045 [2024-07-25 01:04:09.074688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765157 ] 00:18:16.045 [2024-07-25 01:04:09.080043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.045 [2024-07-25 01:04:09.080069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.088064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.088089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.096088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.096113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.104111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.104137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.046 [2024-07-25 01:04:09.112135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.112159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.120157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.120182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.128179] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.128205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.136199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.136224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.144221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.144254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.144942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.046 [2024-07-25 01:04:09.152296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.152329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.160320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.160352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.168313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.168335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.176327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.176349] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.184346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.184368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.046 [2024-07-25 01:04:09.192375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.046 [2024-07-25 01:04:09.192400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.200419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.200454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.208411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.208435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.216429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.216451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.224452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.224474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.232473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.232495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.236552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.303 [2024-07-25 01:04:09.240497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.240518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.248536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.248557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.256577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.256628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.303 [2024-07-25 01:04:09.264614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.303 [2024-07-25 01:04:09.264651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.272636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.272674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.280663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.280700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.288684] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.288721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.296707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.296746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.304720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.304753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.312728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.312753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.320772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.320808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.328795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.328830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.336793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.336820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.344815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.344854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.352836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.352861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.360871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.360902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.368893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.368921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.376916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.376944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.384934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.384961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.392954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.392980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.400976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.401001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.409000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:18:16.304 [2024-07-25 01:04:09.409025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.417019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.417043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.425047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.425075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.433071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.433099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.441094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.441122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.304 [2024-07-25 01:04:09.449116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.304 [2024-07-25 01:04:09.449143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.457144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.457175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.465162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.465189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 Running I/O for 5 seconds... 
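Each iteration above and below logs the same two-line failure: spdk_nvmf_subsystem_add_ns_ext() (subsystem.c:2029) rejects the request because NSID 1 is already occupied on cnode1, and the RPC layer (nvmf_rpc_ns_paused, nvmf_rpc.c:1546) surfaces it as "Unable to add namespace". Only the NSID collides; for contrast, a sketch of a variant that would be expected to succeed (malloc1 and NSID 2 are illustrative, not taken from this log):

# Illustrative only: a fresh bdev under a free NSID does not collide.
rpc_cmd bdev_malloc_create 32 4096 -b malloc1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 2
# Omitting -n lets the target pick the lowest free NSID itself.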
00:18:16.562 [2024-07-25 01:04:09.473185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.473210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.487637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.487670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.499467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.499496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.511219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.511259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.522673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.522705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.535842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.535873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.546901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.546932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.557392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.557421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.568902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.568933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.580121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.580152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.591738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.591770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.603706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.603738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.614864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.614895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.626110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.626142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.637623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 
[2024-07-25 01:04:09.637655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.649182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.649213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.660769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.660800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.672514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.672557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.685227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.685269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.696230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.696270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.562 [2024-07-25 01:04:09.707363] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.562 [2024-07-25 01:04:09.707392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.718945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.718975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.730610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.730641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.742510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.742556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.754170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.754201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.766003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.766035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.777582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.777612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.789370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.789398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.801004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.801036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.811863] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.811895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.825379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.825407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.836048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.836078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.847541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.847585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.860987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.861018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.871873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.871904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.883866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.883897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.895843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.895874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.907238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.907292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.918497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.918526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.930015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.930046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.940969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.820 [2024-07-25 01:04:09.940999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.820 [2024-07-25 01:04:09.952693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.821 [2024-07-25 01:04:09.952725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:16.821 [2024-07-25 01:04:09.964758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:16.821 [2024-07-25 01:04:09.964790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:09.976320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:09.976348] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:09.986914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:09.986942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:09.999606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:09.999633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.011221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.011265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.020515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.020546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.031739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.031769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.045544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.045579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.055925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.055954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.066921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.066950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.079480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.079509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.089285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.089314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.100815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.100843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.111685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.111712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.121950] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.121977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.131939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.131966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.078 [2024-07-25 01:04:10.142854] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.078 [2024-07-25 01:04:10.142881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[log condensed: the same subsystem.c:2029 / nvmf_rpc.c:1546 error pair repeats once per test iteration, roughly every 10 ms, from 01:04:10.155 through 01:04:13.560 (elapsed 00:18:17.078 -> 00:18:20.437, on the order of 300 occurrences); only the first and last pairs are kept]
00:18:20.437 [2024-07-25 01:04:13.560942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.437 [2024-07-25 01:04:13.571923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.437 [2024-07-25 01:04:13.571950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.437 [2024-07-25 01:04:13.585617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.437 [2024-07-25 01:04:13.585645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.595787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.595815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.606619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.606646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.617681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.617708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.627986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.628012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.638227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.638278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.648522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.648551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.659332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.659360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.669719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.669746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.680174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.680202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.690624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.690652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.700787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.700815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.711010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.711039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.721882] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.721909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.734608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.734635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.744786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.744817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.756200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.756265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.766746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.766773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.776961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.776988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.787562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.787589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.798325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.798352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.808924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.808951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.819502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.819538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.830486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.830514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.843367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.843395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.853902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.853928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.864460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.864487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.730 [2024-07-25 01:04:13.877074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.730 [2024-07-25 01:04:13.877104] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.887090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.887116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.897740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.897767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.910012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.910054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.919926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.919953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.939914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.939943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.949692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.949719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.960421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.960448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.973572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.973610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.983254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.983281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:13.993527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:13.993570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.004205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.004233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.015007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.015035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.027433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.027460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.037228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.037263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.047986] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.048012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.059798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.059828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.071377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.071406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.082872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.082903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.094306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.094334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.105839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.105869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.117202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.117233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.128677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.128708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.989 [2024-07-25 01:04:14.140017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.989 [2024-07-25 01:04:14.140047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.152140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.152171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.164044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.164075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.175636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.175667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.188489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.188528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.199023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.199053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.210390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.210418] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.221980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.222010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.233505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.233533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.244719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.244750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.256017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.256047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.267160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.267191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.278790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.278822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.290447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.290474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.303719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.303750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.314169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.314199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.325417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.325445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.336969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.337001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.349107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.349138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.361108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.361140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.372304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.247 [2024-07-25 01:04:14.372332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.247 [2024-07-25 01:04:14.383687] 
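The collision above is deterministic: within a subsystem, SPDK keys namespaces by NSID, so a second nvmf_subsystem_add_ns with an explicit -n 1 is refused until NSID 1 is freed. A minimal sketch of the failure mode against a live target, using the in-tree scripts/rpc.py helper (the bdev names here are illustrative, not taken from this run):

    # attach a bdev as NSID 1 -- succeeds while the NSID is free
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # a second add with the same explicit NSID fails with "Requested NSID 1 already in use"
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
    # removing the namespace frees the NSID, after which the add goes through
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1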
00:18:21.505 Latency(us)
00:18:21.505 Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average    min       max
00:18:21.505 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:21.505 Nvme1n1            : 5.01       11476.78   89.66    0.00     0.00   11136.43   4393.34   26602.76
00:18:21.505 ===================================================================================================================
00:18:21.505 Total              :            11476.78   89.66    0.00     0.00   11136.43   4393.34   26602.76
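A quick consistency check on the summary above: at the 8192-byte I/O size, 11476.78 IOPS x 8192 B ≈ 94.0 MB/s, i.e. 11476.78 x 8192 / 2^20 ≈ 89.66 MiB/s, which matches the MiB/s column; the Average/min/max figures are per-I/O latencies in microseconds over the 5.01 s runtime.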
[... the same error pair continues at roughly 8 ms intervals from 01:04:14.493 through 01:04:14.718 (elapsed 00:18:21.505-00:18:21.763) before the add-namespace loop is torn down; about thirty identical pairs elided ...]
00:18:21.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3765157) - No such process
00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3765157
00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy --
target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.763 delay0 00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.763 01:04:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:21.763 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.763 [2024-07-25 01:04:14.839480] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:28.318 Initializing NVMe Controllers 00:18:28.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:28.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:28.318 Initialization complete. Launching workers. 00:18:28.318 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 865 00:18:28.318 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1139, failed to submit 46 00:18:28.318 success 945, unsuccess 194, failed 0 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.318 rmmod nvme_tcp 00:18:28.318 rmmod nvme_fabrics 00:18:28.318 rmmod nvme_keyring 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3763936 ']' 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3763936 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3763936 ']' 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3763936 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps 
--no-headers -o comm= 3763936 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3763936' 00:18:28.318 killing process with pid 3763936 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3763936 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3763936 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.318 01:04:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.848 01:04:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:30.848 00:18:30.848 real 0m27.662s 00:18:30.848 user 0m40.717s 00:18:30.848 sys 0m8.483s 00:18:30.848 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:30.848 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:30.848 ************************************ 00:18:30.848 END TEST nvmf_zcopy 00:18:30.848 ************************************ 00:18:30.848 01:04:23 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:30.848 01:04:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:30.848 01:04:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:30.848 01:04:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:30.848 ************************************ 00:18:30.848 START TEST nvmf_nmic 00:18:30.848 ************************************ 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:30.848 * Looking for test storage... 
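For context on the nvmf_zcopy abort statistics above: before launching the abort tool, the test swaps the namespace's backing device for a delay bdev so that queued I/O stays in flight long enough to be aborted. The sequence below is reconstructed from the trace with identical arguments; invoking scripts/rpc.py directly (rather than the suite's rpc_cmd wrapper) is an assumption for standalone use:

    # replace NSID 1 with a delay bdev; -r/-t/-w/-n set average/p99 read and write latencies in microseconds (1000000 = 1 s)
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # keep 64 random 50/50 R/W I/Os outstanding against the slow namespace for 5 s, aborting them mid-flight
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With the injected 1 s latency, most submitted I/Os are still queued when the abort arrives, which is why the run shows 1139 aborts submitted and 945 successful against only 320 completed I/Os.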
00:18:30.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.848 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.849 01:04:23 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:30.849 01:04:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:32.749 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.749 
01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:32.749 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:32.749 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:32.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.750 01:04:25 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:32.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:32.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:32.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
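The scan above is nvmf/common.sh matching Intel E810 functions (vendor 0x8086, device 0x159b) against its PCI allow-list and then resolving each function's kernel netdev through /sys/bus/pci/devices/$pci/net/. A rough standalone equivalent, assuming pciutils is installed (the lspci approach is mine, not the script's):

    # list E810 functions and the net devices bound to them, e.g. "0000:0a:00.0 -> cvl_0_0"
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "$pci -> $(basename "$netdir")"
        done
    done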
00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:32.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:18:32.750 00:18:32.750 --- 10.0.0.2 ping statistics --- 00:18:32.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.750 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:18:32.750 00:18:32.750 --- 10.0.0.1 ping statistics --- 00:18:32.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.750 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3768525 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3768525 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3768525 ']' 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:32.750 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:32.750 [2024-07-25 01:04:25.629444] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:18:32.751 [2024-07-25 01:04:25.629537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.751 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.751 [2024-07-25 01:04:25.701988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.751 [2024-07-25 01:04:25.799137] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.751 [2024-07-25 01:04:25.799200] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:32.751 [2024-07-25 01:04:25.799216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.751 [2024-07-25 01:04:25.799230] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.751 [2024-07-25 01:04:25.799251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.751 [2024-07-25 01:04:25.799312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.751 [2024-07-25 01:04:25.799368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.751 [2024-07-25 01:04:25.799431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.751 [2024-07-25 01:04:25.799434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.008 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 [2024-07-25 01:04:25.950994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 Malloc0 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 01:04:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 [2024-07-25 01:04:26.004460] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:33.009 test case1: single bdev can't be used in multiple subsystems 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 [2024-07-25 01:04:26.028271] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:33.009 [2024-07-25 01:04:26.028308] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:33.009 [2024-07-25 01:04:26.028324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.009 request: 00:18:33.009 { 00:18:33.009 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:33.009 "namespace": { 00:18:33.009 "bdev_name": "Malloc0", 00:18:33.009 "no_auto_visible": false 00:18:33.009 }, 00:18:33.009 "method": "nvmf_subsystem_add_ns", 00:18:33.009 "req_id": 1 00:18:33.009 } 00:18:33.009 Got JSON-RPC error response 00:18:33.009 response: 00:18:33.009 { 00:18:33.009 "code": -32602, 00:18:33.009 "message": "Invalid parameters" 00:18:33.009 } 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:33.009 Adding namespace failed - expected result. 
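Test case1 above exercises SPDK's exclusive bdev claim: once cnode1 owns Malloc0, a second subsystem cannot open it and the RPC fails with -32602. A minimal reproduction against an already-running target, reusing the same rpc.py commands and arguments that appear in this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # claims Malloc0 (exclusive_write)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  || echo 'Adding namespace failed - expected result.'             # JSON-RPC error -32602, as logged above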
00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:33.009 test case2: host connect to nvmf target in multiple paths 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 [2024-07-25 01:04:26.036402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:33.573 01:04:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:34.532 01:04:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:34.532 01:04:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:34.532 01:04:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.532 01:04:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:34.532 01:04:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:36.426 01:04:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:36.426 01:04:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:36.426 01:04:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:36.426 01:04:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:36.426 01:04:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.426 01:04:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:36.426 01:04:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:36.426 [global] 00:18:36.426 thread=1 00:18:36.426 invalidate=1 00:18:36.426 rw=write 00:18:36.426 time_based=1 00:18:36.426 runtime=1 00:18:36.426 ioengine=libaio 00:18:36.426 direct=1 00:18:36.426 bs=4096 00:18:36.426 iodepth=1 00:18:36.426 norandommap=0 00:18:36.426 numjobs=1 00:18:36.426 00:18:36.427 verify_dump=1 00:18:36.427 verify_backlog=512 00:18:36.427 verify_state_save=0 00:18:36.427 do_verify=1 00:18:36.427 verify=crc32c-intel 00:18:36.427 [job0] 00:18:36.427 filename=/dev/nvme0n1 00:18:36.427 Could not set queue depth (nvme0n1) 00:18:36.427 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:36.427 fio-3.35 00:18:36.427 Starting 1 thread 00:18:37.797 00:18:37.797 job0: (groupid=0, jobs=1): err= 0: pid=3769043: Thu Jul 25 01:04:30 2024 00:18:37.797 read: IOPS=509, BW=2037KiB/s (2086kB/s)(2112KiB/1037msec) 00:18:37.797 slat (nsec): min=6765, max=34674, avg=11931.37, stdev=4949.73 
00:18:37.797 clat (usec): min=239, max=42034, avg=1538.76, stdev=7140.87 00:18:37.797 lat (usec): min=247, max=42047, avg=1550.69, stdev=7141.64 00:18:37.797 clat percentiles (usec): 00:18:37.797 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:18:37.797 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:18:37.797 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 318], 00:18:37.797 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:37.797 | 99.99th=[42206] 00:18:37.797 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:18:37.797 slat (nsec): min=8376, max=53496, avg=13162.72, stdev=5966.54 00:18:37.797 clat (usec): min=158, max=375, avg=194.28, stdev=23.01 00:18:37.797 lat (usec): min=167, max=420, avg=207.44, stdev=26.09 00:18:37.797 clat percentiles (usec): 00:18:37.797 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:18:37.797 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:18:37.797 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 233], 00:18:37.797 | 99.00th=[ 293], 99.50th=[ 318], 99.90th=[ 338], 99.95th=[ 375], 00:18:37.797 | 99.99th=[ 375] 00:18:37.797 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:18:37.797 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:37.797 lat (usec) : 250=66.04%, 500=32.93% 00:18:37.797 lat (msec) : 50=1.03% 00:18:37.797 cpu : usr=1.45%, sys=2.61%, ctx=1552, majf=0, minf=2 00:18:37.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:37.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.797 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:37.797 00:18:37.797 Run status group 0 (all jobs): 00:18:37.797 READ: bw=2037KiB/s (2086kB/s), 2037KiB/s-2037KiB/s (2086kB/s-2086kB/s), io=2112KiB (2163kB), run=1037-1037msec 00:18:37.797 WRITE: bw=3950KiB/s (4045kB/s), 3950KiB/s-3950KiB/s (4045kB/s-4045kB/s), io=4096KiB (4194kB), run=1037-1037msec 00:18:37.797 00:18:37.797 Disk stats (read/write): 00:18:37.797 nvme0n1: ios=574/1024, merge=0/0, ticks=828/192, in_queue=1020, util=95.69% 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:37.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
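The [global]/[job0] options the fio-wrapper printed before this run collapse to a single fio command line. A sketch, assuming the connected namespace surfaced as /dev/nvme0n1 as it did here; verify=crc32c-intel is what turns this into a data-integrity check rather than a plain throughput run:

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
  --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
  --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512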
00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.797 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.797 rmmod nvme_tcp 00:18:37.797 rmmod nvme_fabrics 00:18:37.797 rmmod nvme_keyring 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3768525 ']' 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3768525 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3768525 ']' 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3768525 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3768525 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3768525' 00:18:38.055 killing process with pid 3768525 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3768525 00:18:38.055 01:04:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3768525 00:18:38.314 01:04:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:38.314 01:04:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:38.314 01:04:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:38.314 01:04:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:38.314 01:04:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:38.314 01:04:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.314 01:04:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.314 01:04:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.215 01:04:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:40.215 00:18:40.215 real 0m9.816s 00:18:40.215 user 0m22.547s 00:18:40.215 sys 0m2.218s 00:18:40.215 01:04:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:40.215 01:04:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:40.215 ************************************ 00:18:40.215 END TEST nvmf_nmic 00:18:40.215 ************************************ 00:18:40.215 01:04:33 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:40.215 01:04:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:40.215 01:04:33 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:18:40.215 01:04:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:40.215 ************************************ 00:18:40.215 START TEST nvmf_fio_target 00:18:40.215 ************************************ 00:18:40.215 01:04:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:40.473 * Looking for test storage... 00:18:40.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:40.473 01:04:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.373 01:04:35 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:42.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:42.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.373 01:04:35 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:42.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:42.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:42.373 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.631 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.631 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:18:42.631 00:18:42.631 --- 10.0.0.2 ping statistics --- 00:18:42.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.631 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:18:42.631 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:18:42.631 00:18:42.631 --- 10.0.0.1 ping statistics --- 00:18:42.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.631 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3771231 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3771231 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3771231 ']' 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
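nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks until the RPC socket answers. A crude stand-in for that launch-and-wait sequence, using rpc_get_methods as the liveness probe (the real waitforlisten helper may check differently):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2                                  # poll until the target answers on the RPC socket
done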
00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:42.632 01:04:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.632 [2024-07-25 01:04:35.623031] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:18:42.632 [2024-07-25 01:04:35.623122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.632 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.632 [2024-07-25 01:04:35.699805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.890 [2024-07-25 01:04:35.793884] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.890 [2024-07-25 01:04:35.793938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.890 [2024-07-25 01:04:35.793954] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.890 [2024-07-25 01:04:35.793967] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.890 [2024-07-25 01:04:35.793979] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.890 [2024-07-25 01:04:35.794058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.890 [2024-07-25 01:04:35.794109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.890 [2024-07-25 01:04:35.794159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.890 [2024-07-25 01:04:35.794162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.454 01:04:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:43.454 01:04:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:43.454 01:04:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:43.454 01:04:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.454 01:04:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.712 01:04:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.712 01:04:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:43.712 [2024-07-25 01:04:36.837160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.712 01:04:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.969 01:04:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:43.969 01:04:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.226 01:04:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:44.226 01:04:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.790 01:04:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:44.790 01:04:37 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.790 01:04:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:44.790 01:04:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:45.048 01:04:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.305 01:04:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:45.305 01:04:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.562 01:04:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:45.562 01:04:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.820 01:04:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:45.820 01:04:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:46.077 01:04:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:46.334 01:04:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:46.334 01:04:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.591 01:04:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:46.591 01:04:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:46.848 01:04:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.105 [2024-07-25 01:04:40.139467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.105 01:04:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:47.363 01:04:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:47.621 01:04:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.185 01:04:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:48.185 01:04:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:48.185 01:04:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:18:48.185 01:04:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:48.185 01:04:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:48.185 01:04:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:50.081 01:04:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:50.081 01:04:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:50.082 01:04:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.339 01:04:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:50.339 01:04:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.339 01:04:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:50.339 01:04:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:50.339 [global] 00:18:50.339 thread=1 00:18:50.339 invalidate=1 00:18:50.339 rw=write 00:18:50.339 time_based=1 00:18:50.339 runtime=1 00:18:50.339 ioengine=libaio 00:18:50.339 direct=1 00:18:50.339 bs=4096 00:18:50.339 iodepth=1 00:18:50.339 norandommap=0 00:18:50.339 numjobs=1 00:18:50.339 00:18:50.339 verify_dump=1 00:18:50.339 verify_backlog=512 00:18:50.339 verify_state_save=0 00:18:50.339 do_verify=1 00:18:50.339 verify=crc32c-intel 00:18:50.339 [job0] 00:18:50.339 filename=/dev/nvme0n1 00:18:50.339 [job1] 00:18:50.339 filename=/dev/nvme0n2 00:18:50.339 [job2] 00:18:50.339 filename=/dev/nvme0n3 00:18:50.339 [job3] 00:18:50.339 filename=/dev/nvme0n4 00:18:50.339 Could not set queue depth (nvme0n1) 00:18:50.339 Could not set queue depth (nvme0n2) 00:18:50.339 Could not set queue depth (nvme0n3) 00:18:50.339 Could not set queue depth (nvme0n4) 00:18:50.339 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.339 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.339 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.339 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.339 fio-3.35 00:18:50.339 Starting 4 threads 00:18:51.767 00:18:51.767 job0: (groupid=0, jobs=1): err= 0: pid=3772310: Thu Jul 25 01:04:44 2024 00:18:51.767 read: IOPS=1316, BW=5267KiB/s (5393kB/s)(5272KiB/1001msec) 00:18:51.767 slat (nsec): min=6917, max=42688, avg=15691.07, stdev=5436.25 00:18:51.767 clat (usec): min=295, max=634, avg=393.11, stdev=29.13 00:18:51.767 lat (usec): min=302, max=655, avg=408.80, stdev=32.04 00:18:51.767 clat percentiles (usec): 00:18:51.767 | 1.00th=[ 334], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 371], 00:18:51.767 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 404], 00:18:51.767 | 70.00th=[ 408], 80.00th=[ 412], 90.00th=[ 424], 95.00th=[ 433], 00:18:51.767 | 99.00th=[ 457], 99.50th=[ 519], 99.90th=[ 627], 99.95th=[ 635], 00:18:51.767 | 99.99th=[ 635] 00:18:51.767 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:51.767 slat (nsec): min=8560, max=71338, avg=21053.21, stdev=8921.47 00:18:51.767 clat (usec): min=179, max=603, avg=269.56, stdev=79.68 
00:18:51.767 lat (usec): min=189, max=634, avg=290.61, stdev=83.72 00:18:51.767 clat percentiles (usec): 00:18:51.767 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 217], 00:18:51.767 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 253], 00:18:51.767 | 70.00th=[ 269], 80.00th=[ 302], 90.00th=[ 408], 95.00th=[ 457], 00:18:51.767 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 594], 99.95th=[ 603], 00:18:51.767 | 99.99th=[ 603] 00:18:51.767 bw ( KiB/s): min= 6616, max= 6616, per=36.22%, avg=6616.00, stdev= 0.00, samples=1 00:18:51.767 iops : min= 1654, max= 1654, avg=1654.00, stdev= 0.00, samples=1 00:18:51.767 lat (usec) : 250=31.15%, 500=67.13%, 750=1.72% 00:18:51.767 cpu : usr=4.00%, sys=7.20%, ctx=2854, majf=0, minf=1 00:18:51.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.767 issued rwts: total=1318,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.767 job1: (groupid=0, jobs=1): err= 0: pid=3772311: Thu Jul 25 01:04:44 2024 00:18:51.767 read: IOPS=826, BW=3305KiB/s (3384kB/s)(3308KiB/1001msec) 00:18:51.767 slat (nsec): min=5876, max=64659, avg=19407.81, stdev=10806.65 00:18:51.767 clat (usec): min=297, max=41391, avg=881.86, stdev=4516.90 00:18:51.767 lat (usec): min=304, max=41410, avg=901.27, stdev=4518.46 00:18:51.767 clat percentiles (usec): 00:18:51.767 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:18:51.767 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 371], 00:18:51.767 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 441], 00:18:51.767 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:51.767 | 99.99th=[41157] 00:18:51.767 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:51.767 slat (nsec): min=7397, max=43463, avg=15291.19, stdev=7120.20 00:18:51.767 clat (usec): min=165, max=455, avg=221.37, stdev=38.30 00:18:51.767 lat (usec): min=174, max=489, avg=236.67, stdev=41.04 00:18:51.767 clat percentiles (usec): 00:18:51.767 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:18:51.767 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:18:51.767 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 273], 95.00th=[ 306], 00:18:51.767 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 420], 99.95th=[ 457], 00:18:51.767 | 99.99th=[ 457] 00:18:51.767 bw ( KiB/s): min= 4096, max= 4096, per=22.42%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.767 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.767 lat (usec) : 250=46.95%, 500=52.35%, 750=0.11% 00:18:51.767 lat (msec) : 50=0.59% 00:18:51.767 cpu : usr=1.30%, sys=3.80%, ctx=1852, majf=0, minf=1 00:18:51.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.767 issued rwts: total=827,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.767 job2: (groupid=0, jobs=1): err= 0: pid=3772312: Thu Jul 25 01:04:44 2024 00:18:51.767 read: IOPS=399, BW=1598KiB/s (1637kB/s)(1600KiB/1001msec) 00:18:51.767 slat (nsec): min=5691, max=59031, 
avg=22162.67, stdev=10807.79 00:18:51.767 clat (usec): min=311, max=42062, avg=2010.20, stdev=7864.69 00:18:51.767 lat (usec): min=326, max=42078, avg=2032.36, stdev=7865.93 00:18:51.767 clat percentiles (usec): 00:18:51.767 | 1.00th=[ 314], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 359], 00:18:51.767 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 408], 00:18:51.767 | 70.00th=[ 420], 80.00th=[ 453], 90.00th=[ 529], 95.00th=[ 742], 00:18:51.767 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:18:51.767 | 99.99th=[42206] 00:18:51.767 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:51.767 slat (nsec): min=8216, max=63131, avg=25525.24, stdev=10450.12 00:18:51.767 clat (usec): min=244, max=572, avg=323.06, stdev=53.62 00:18:51.767 lat (usec): min=263, max=605, avg=348.58, stdev=51.94 00:18:51.767 clat percentiles (usec): 00:18:51.767 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 277], 00:18:51.768 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:18:51.768 | 70.00th=[ 343], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 424], 00:18:51.768 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 570], 99.95th=[ 570], 00:18:51.768 | 99.99th=[ 570] 00:18:51.768 bw ( KiB/s): min= 4096, max= 4096, per=22.42%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.768 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.768 lat (usec) : 250=0.44%, 500=93.20%, 750=4.17%, 1000=0.44% 00:18:51.768 lat (msec) : 50=1.75% 00:18:51.768 cpu : usr=1.00%, sys=2.40%, ctx=913, majf=0, minf=1 00:18:51.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.768 issued rwts: total=400,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.768 job3: (groupid=0, jobs=1): err= 0: pid=3772313: Thu Jul 25 01:04:44 2024 00:18:51.768 read: IOPS=1050, BW=4202KiB/s (4303kB/s)(4240KiB/1009msec) 00:18:51.768 slat (nsec): min=5899, max=65206, avg=24765.03, stdev=10263.84 00:18:51.768 clat (usec): min=257, max=42046, avg=551.64, stdev=2528.04 00:18:51.768 lat (usec): min=264, max=42062, avg=576.41, stdev=2527.92 00:18:51.768 clat percentiles (usec): 00:18:51.768 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 322], 00:18:51.768 | 30.00th=[ 343], 40.00th=[ 359], 50.00th=[ 379], 60.00th=[ 396], 00:18:51.768 | 70.00th=[ 412], 80.00th=[ 465], 90.00th=[ 529], 95.00th=[ 578], 00:18:51.768 | 99.00th=[ 717], 99.50th=[ 783], 99.90th=[42206], 99.95th=[42206], 00:18:51.768 | 99.99th=[42206] 00:18:51.768 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:18:51.768 slat (nsec): min=6605, max=55339, avg=16425.48, stdev=7675.17 00:18:51.768 clat (usec): min=177, max=610, avg=230.43, stdev=44.28 00:18:51.768 lat (usec): min=186, max=639, avg=246.86, stdev=47.92 00:18:51.768 clat percentiles (usec): 00:18:51.768 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:18:51.768 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:18:51.768 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 277], 95.00th=[ 334], 00:18:51.768 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 594], 99.95th=[ 611], 00:18:51.768 | 99.99th=[ 611] 00:18:51.768 bw ( KiB/s): min= 4528, max= 7760, per=33.63%, avg=6144.00, stdev=2285.37, samples=2 00:18:51.768 iops : min= 
1132, max= 1940, avg=1536.00, stdev=571.34, samples=2 00:18:51.768 lat (usec) : 250=48.27%, 500=45.03%, 750=6.43%, 1000=0.12% 00:18:51.768 lat (msec) : 50=0.15% 00:18:51.768 cpu : usr=3.08%, sys=5.06%, ctx=2598, majf=0, minf=2 00:18:51.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.768 issued rwts: total=1060,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.768 00:18:51.768 Run status group 0 (all jobs): 00:18:51.768 READ: bw=14.0MiB/s (14.6MB/s), 1598KiB/s-5267KiB/s (1637kB/s-5393kB/s), io=14.1MiB (14.8MB), run=1001-1009msec 00:18:51.768 WRITE: bw=17.8MiB/s (18.7MB/s), 2046KiB/s-6138KiB/s (2095kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1009msec 00:18:51.768 00:18:51.768 Disk stats (read/write): 00:18:51.768 nvme0n1: ios=1074/1380, merge=0/0, ticks=431/357, in_queue=788, util=87.37% 00:18:51.768 nvme0n2: ios=567/939, merge=0/0, ticks=1053/198, in_queue=1251, util=90.65% 00:18:51.768 nvme0n3: ios=114/512, merge=0/0, ticks=1053/161, in_queue=1214, util=93.22% 00:18:51.768 nvme0n4: ios=1113/1536, merge=0/0, ticks=482/340, in_queue=822, util=95.69% 00:18:51.768 01:04:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:51.768 [global] 00:18:51.768 thread=1 00:18:51.768 invalidate=1 00:18:51.768 rw=randwrite 00:18:51.768 time_based=1 00:18:51.768 runtime=1 00:18:51.768 ioengine=libaio 00:18:51.768 direct=1 00:18:51.768 bs=4096 00:18:51.768 iodepth=1 00:18:51.768 norandommap=0 00:18:51.768 numjobs=1 00:18:51.768 00:18:51.768 verify_dump=1 00:18:51.768 verify_backlog=512 00:18:51.768 verify_state_save=0 00:18:51.768 do_verify=1 00:18:51.768 verify=crc32c-intel 00:18:51.768 [job0] 00:18:51.768 filename=/dev/nvme0n1 00:18:51.768 [job1] 00:18:51.768 filename=/dev/nvme0n2 00:18:51.768 [job2] 00:18:51.768 filename=/dev/nvme0n3 00:18:51.768 [job3] 00:18:51.768 filename=/dev/nvme0n4 00:18:51.768 Could not set queue depth (nvme0n1) 00:18:51.768 Could not set queue depth (nvme0n2) 00:18:51.768 Could not set queue depth (nvme0n3) 00:18:51.768 Could not set queue depth (nvme0n4) 00:18:51.768 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.768 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.768 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.768 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.768 fio-3.35 00:18:51.768 Starting 4 threads 00:18:53.140 00:18:53.140 job0: (groupid=0, jobs=1): err= 0: pid=3772538: Thu Jul 25 01:04:46 2024 00:18:53.140 read: IOPS=501, BW=2008KiB/s (2056kB/s)(2080KiB/1036msec) 00:18:53.140 slat (nsec): min=5160, max=61923, avg=10671.49, stdev=6191.52 00:18:53.140 clat (usec): min=230, max=42080, avg=1481.73, stdev=6904.37 00:18:53.140 lat (usec): min=237, max=42097, avg=1492.40, stdev=6907.10 00:18:53.140 clat percentiles (usec): 00:18:53.140 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:18:53.140 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:18:53.140 | 70.00th=[ 285], 
80.00th=[ 314], 90.00th=[ 351], 95.00th=[ 392], 00:18:53.140 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:53.140 | 99.99th=[42206] 00:18:53.140 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:18:53.140 slat (nsec): min=6229, max=66440, avg=12669.02, stdev=9203.85 00:18:53.140 clat (usec): min=154, max=452, avg=235.71, stdev=38.27 00:18:53.140 lat (usec): min=163, max=460, avg=248.38, stdev=42.36 00:18:53.140 clat percentiles (usec): 00:18:53.140 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 217], 00:18:53.140 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 241], 00:18:53.140 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 302], 00:18:53.140 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 449], 99.95th=[ 453], 00:18:53.140 | 99.99th=[ 453] 00:18:53.140 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=2 00:18:53.140 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:18:53.140 lat (usec) : 250=59.39%, 500=39.31%, 750=0.13%, 1000=0.06% 00:18:53.140 lat (msec) : 2=0.06%, 10=0.06%, 50=0.97% 00:18:53.140 cpu : usr=0.97%, sys=1.74%, ctx=1547, majf=0, minf=1 00:18:53.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.140 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.140 job1: (groupid=0, jobs=1): err= 0: pid=3772540: Thu Jul 25 01:04:46 2024 00:18:53.140 read: IOPS=142, BW=571KiB/s (585kB/s)(588KiB/1030msec) 00:18:53.140 slat (nsec): min=5962, max=34070, avg=15668.99, stdev=8047.92 00:18:53.140 clat (usec): min=280, max=41444, avg=6124.67, stdev=14231.96 00:18:53.140 lat (usec): min=286, max=41451, avg=6140.34, stdev=14235.29 00:18:53.140 clat percentiles (usec): 00:18:53.140 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:18:53.140 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 347], 00:18:53.140 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[40633], 95.00th=[41157], 00:18:53.140 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:53.140 | 99.99th=[41681] 00:18:53.140 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:18:53.140 slat (nsec): min=5988, max=31516, avg=7825.15, stdev=3051.19 00:18:53.140 clat (usec): min=165, max=860, avg=236.63, stdev=59.99 00:18:53.140 lat (usec): min=171, max=868, avg=244.45, stdev=60.33 00:18:53.140 clat percentiles (usec): 00:18:53.140 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 202], 00:18:53.140 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:18:53.140 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 289], 95.00th=[ 334], 00:18:53.140 | 99.00th=[ 404], 99.50th=[ 619], 99.90th=[ 865], 99.95th=[ 865], 00:18:53.140 | 99.99th=[ 865] 00:18:53.140 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.140 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.140 lat (usec) : 250=61.76%, 500=34.29%, 750=0.61%, 1000=0.15% 00:18:53.140 lat (msec) : 50=3.19% 00:18:53.140 cpu : usr=0.00%, sys=0.87%, ctx=659, majf=0, minf=2 00:18:53.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:18:53.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.140 issued rwts: total=147,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.140 job2: (groupid=0, jobs=1): err= 0: pid=3772541: Thu Jul 25 01:04:46 2024 00:18:53.140 read: IOPS=226, BW=906KiB/s (927kB/s)(940KiB/1038msec) 00:18:53.140 slat (nsec): min=5601, max=42692, avg=15571.80, stdev=8612.99 00:18:53.140 clat (usec): min=291, max=41088, avg=3809.30, stdev=11353.76 00:18:53.140 lat (usec): min=298, max=41101, avg=3824.87, stdev=11357.12 00:18:53.140 clat percentiles (usec): 00:18:53.140 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:18:53.140 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 347], 00:18:53.140 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 644], 95.00th=[41157], 00:18:53.140 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:53.140 | 99.99th=[41157] 00:18:53.140 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:18:53.140 slat (nsec): min=6828, max=30652, avg=9039.76, stdev=3281.10 00:18:53.140 clat (usec): min=177, max=633, avg=257.15, stdev=48.85 00:18:53.140 lat (usec): min=185, max=641, avg=266.19, stdev=49.44 00:18:53.140 clat percentiles (usec): 00:18:53.140 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 233], 00:18:53.140 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 249], 00:18:53.140 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 388], 00:18:53.140 | 99.00th=[ 412], 99.50th=[ 562], 99.90th=[ 635], 99.95th=[ 635], 00:18:53.140 | 99.99th=[ 635] 00:18:53.140 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.140 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.140 lat (usec) : 250=43.78%, 500=52.07%, 750=1.20%, 1000=0.27% 00:18:53.140 lat (msec) : 50=2.68% 00:18:53.140 cpu : usr=0.87%, sys=0.29%, ctx=748, majf=0, minf=1 00:18:53.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.140 issued rwts: total=235,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.140 job3: (groupid=0, jobs=1): err= 0: pid=3772542: Thu Jul 25 01:04:46 2024 00:18:53.140 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:18:53.140 slat (nsec): min=5986, max=33598, avg=24295.27, stdev=9574.19 00:18:53.140 clat (usec): min=32209, max=41027, avg=40551.39, stdev=1865.11 00:18:53.140 lat (usec): min=32242, max=41044, avg=40575.68, stdev=1863.25 00:18:53.140 clat percentiles (usec): 00:18:53.140 | 1.00th=[32113], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:53.140 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:53.140 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:53.140 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:53.140 | 99.99th=[41157] 00:18:53.140 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:18:53.141 slat (nsec): min=6117, max=31582, avg=8962.32, stdev=3834.66 00:18:53.141 clat (usec): min=187, max=631, avg=258.43, stdev=52.92 00:18:53.141 lat (usec): min=202, max=647, avg=267.39, stdev=53.29 00:18:53.141 clat percentiles 
(usec): 00:18:53.141 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 231], 00:18:53.141 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:18:53.141 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 322], 95.00th=[ 383], 00:18:53.141 | 99.00th=[ 449], 99.50th=[ 529], 99.90th=[ 635], 99.95th=[ 635], 00:18:53.141 | 99.99th=[ 635] 00:18:53.141 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.141 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.141 lat (usec) : 250=62.55%, 500=32.58%, 750=0.75% 00:18:53.141 lat (msec) : 50=4.12% 00:18:53.141 cpu : usr=0.10%, sys=0.58%, ctx=535, majf=0, minf=1 00:18:53.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.141 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.141 00:18:53.141 Run status group 0 (all jobs): 00:18:53.141 READ: bw=3561KiB/s (3646kB/s), 85.4KiB/s-2008KiB/s (87.4kB/s-2056kB/s), io=3696KiB (3785kB), run=1030-1038msec 00:18:53.141 WRITE: bw=9865KiB/s (10.1MB/s), 1973KiB/s-3954KiB/s (2020kB/s-4049kB/s), io=10.0MiB (10.5MB), run=1030-1038msec 00:18:53.141 00:18:53.141 Disk stats (read/write): 00:18:53.141 nvme0n1: ios=563/1024, merge=0/0, ticks=846/227, in_queue=1073, util=98.40% 00:18:53.141 nvme0n2: ios=191/512, merge=0/0, ticks=724/119, in_queue=843, util=87.59% 00:18:53.141 nvme0n3: ios=287/512, merge=0/0, ticks=1270/132, in_queue=1402, util=97.59% 00:18:53.141 nvme0n4: ios=71/512, merge=0/0, ticks=1310/129, in_queue=1439, util=99.05% 00:18:53.141 01:04:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:53.141 [global] 00:18:53.141 thread=1 00:18:53.141 invalidate=1 00:18:53.141 rw=write 00:18:53.141 time_based=1 00:18:53.141 runtime=1 00:18:53.141 ioengine=libaio 00:18:53.141 direct=1 00:18:53.141 bs=4096 00:18:53.141 iodepth=128 00:18:53.141 norandommap=0 00:18:53.141 numjobs=1 00:18:53.141 00:18:53.141 verify_dump=1 00:18:53.141 verify_backlog=512 00:18:53.141 verify_state_save=0 00:18:53.141 do_verify=1 00:18:53.141 verify=crc32c-intel 00:18:53.141 [job0] 00:18:53.141 filename=/dev/nvme0n1 00:18:53.141 [job1] 00:18:53.141 filename=/dev/nvme0n2 00:18:53.141 [job2] 00:18:53.141 filename=/dev/nvme0n3 00:18:53.141 [job3] 00:18:53.141 filename=/dev/nvme0n4 00:18:53.141 Could not set queue depth (nvme0n1) 00:18:53.141 Could not set queue depth (nvme0n2) 00:18:53.141 Could not set queue depth (nvme0n3) 00:18:53.141 Could not set queue depth (nvme0n4) 00:18:53.398 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.398 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.398 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.398 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.398 fio-3.35 00:18:53.398 Starting 4 threads 00:18:54.770 00:18:54.770 job0: (groupid=0, jobs=1): err= 0: pid=3772774: Thu Jul 25 01:04:47 2024 00:18:54.770 read: IOPS=3061, BW=12.0MiB/s (12.5MB/s)(12.1MiB/1008msec) 
00:18:54.770 slat (usec): min=2, max=18165, avg=128.39, stdev=840.37 00:18:54.770 clat (usec): min=5447, max=35180, avg=16915.92, stdev=5124.82 00:18:54.770 lat (usec): min=5451, max=39180, avg=17044.31, stdev=5198.88 00:18:54.770 clat percentiles (usec): 00:18:54.770 | 1.00th=[ 5538], 5.00th=[10421], 10.00th=[10945], 20.00th=[11600], 00:18:54.770 | 30.00th=[14222], 40.00th=[15533], 50.00th=[16319], 60.00th=[17433], 00:18:54.770 | 70.00th=[18744], 80.00th=[21103], 90.00th=[23725], 95.00th=[26608], 00:18:54.770 | 99.00th=[32375], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:18:54.770 | 99.99th=[35390] 00:18:54.770 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:18:54.770 slat (usec): min=3, max=28013, avg=140.55, stdev=908.24 00:18:54.770 clat (usec): min=3512, max=82806, avg=21005.53, stdev=13154.33 00:18:54.770 lat (usec): min=3517, max=82818, avg=21146.08, stdev=13240.81 00:18:54.770 clat percentiles (usec): 00:18:54.770 | 1.00th=[ 4359], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[10945], 00:18:54.770 | 30.00th=[14746], 40.00th=[15795], 50.00th=[18220], 60.00th=[20055], 00:18:54.770 | 70.00th=[24249], 80.00th=[27395], 90.00th=[33817], 95.00th=[38011], 00:18:54.770 | 99.00th=[79168], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:18:54.770 | 99.99th=[82314] 00:18:54.770 bw ( KiB/s): min=12664, max=15096, per=21.90%, avg=13880.00, stdev=1719.68, samples=2 00:18:54.770 iops : min= 3166, max= 3774, avg=3470.00, stdev=429.92, samples=2 00:18:54.770 lat (msec) : 4=0.40%, 10=10.90%, 20=54.39%, 50=32.40%, 100=1.90% 00:18:54.770 cpu : usr=3.77%, sys=7.55%, ctx=312, majf=0, minf=1 00:18:54.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:54.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.770 issued rwts: total=3086,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.770 job1: (groupid=0, jobs=1): err= 0: pid=3772775: Thu Jul 25 01:04:47 2024 00:18:54.770 read: IOPS=3914, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1010msec) 00:18:54.770 slat (usec): min=2, max=12986, avg=113.84, stdev=751.03 00:18:54.770 clat (usec): min=3374, max=58307, avg=14358.72, stdev=6752.73 00:18:54.770 lat (usec): min=3723, max=58316, avg=14472.56, stdev=6803.09 00:18:54.770 clat percentiles (usec): 00:18:54.770 | 1.00th=[ 4621], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[ 9896], 00:18:54.770 | 30.00th=[10421], 40.00th=[10945], 50.00th=[12256], 60.00th=[13829], 00:18:54.770 | 70.00th=[15401], 80.00th=[17695], 90.00th=[21890], 95.00th=[26084], 00:18:54.770 | 99.00th=[45876], 99.50th=[47449], 99.90th=[54264], 99.95th=[58459], 00:18:54.770 | 99.99th=[58459] 00:18:54.770 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:18:54.770 slat (usec): min=3, max=18220, avg=129.33, stdev=716.76 00:18:54.770 clat (usec): min=2458, max=87344, avg=16849.70, stdev=12618.73 00:18:54.770 lat (usec): min=2465, max=87350, avg=16979.02, stdev=12704.54 00:18:54.770 clat percentiles (usec): 00:18:54.770 | 1.00th=[ 3851], 5.00th=[ 6390], 10.00th=[ 7963], 20.00th=[10028], 00:18:54.770 | 30.00th=[10552], 40.00th=[10683], 50.00th=[11469], 60.00th=[14877], 00:18:54.770 | 70.00th=[18744], 80.00th=[22676], 90.00th=[29492], 95.00th=[39584], 00:18:54.770 | 99.00th=[80217], 99.50th=[80217], 99.90th=[87557], 99.95th=[87557], 00:18:54.770 | 99.99th=[87557] 00:18:54.770 bw ( 
KiB/s): min=12288, max=20480, per=25.85%, avg=16384.00, stdev=5792.62, samples=2 00:18:54.770 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:18:54.770 lat (msec) : 4=0.84%, 10=19.60%, 20=58.05%, 50=19.83%, 100=1.68% 00:18:54.770 cpu : usr=3.27%, sys=4.86%, ctx=514, majf=0, minf=1 00:18:54.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:54.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.771 issued rwts: total=3954,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.771 job2: (groupid=0, jobs=1): err= 0: pid=3772776: Thu Jul 25 01:04:47 2024 00:18:54.771 read: IOPS=3064, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1008msec) 00:18:54.771 slat (usec): min=2, max=22327, avg=153.34, stdev=1062.57 00:18:54.771 clat (usec): min=3012, max=52778, avg=19912.08, stdev=8472.36 00:18:54.771 lat (usec): min=4576, max=52813, avg=20065.42, stdev=8531.44 00:18:54.771 clat percentiles (usec): 00:18:54.771 | 1.00th=[ 4621], 5.00th=[ 8455], 10.00th=[11338], 20.00th=[13698], 00:18:54.771 | 30.00th=[14877], 40.00th=[15795], 50.00th=[18482], 60.00th=[19268], 00:18:54.771 | 70.00th=[21103], 80.00th=[25822], 90.00th=[33817], 95.00th=[39060], 00:18:54.771 | 99.00th=[41681], 99.50th=[42730], 99.90th=[49546], 99.95th=[49546], 00:18:54.771 | 99.99th=[52691] 00:18:54.771 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:18:54.771 slat (usec): min=3, max=16152, avg=140.14, stdev=767.24 00:18:54.771 clat (usec): min=1815, max=46376, avg=18496.32, stdev=7837.97 00:18:54.771 lat (usec): min=1825, max=46389, avg=18636.47, stdev=7886.20 00:18:54.771 clat percentiles (usec): 00:18:54.771 | 1.00th=[ 4080], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[11994], 00:18:54.771 | 30.00th=[13960], 40.00th=[16450], 50.00th=[17695], 60.00th=[18482], 00:18:54.771 | 70.00th=[20317], 80.00th=[24249], 90.00th=[28967], 95.00th=[33817], 00:18:54.771 | 99.00th=[43779], 99.50th=[44303], 99.90th=[46400], 99.95th=[46400], 00:18:54.771 | 99.99th=[46400] 00:18:54.771 bw ( KiB/s): min=11568, max=16216, per=21.92%, avg=13892.00, stdev=3286.63, samples=2 00:18:54.771 iops : min= 2892, max= 4054, avg=3473.00, stdev=821.66, samples=2 00:18:54.771 lat (msec) : 2=0.22%, 4=0.25%, 10=8.23%, 20=58.83%, 50=32.44% 00:18:54.771 lat (msec) : 100=0.01% 00:18:54.771 cpu : usr=3.57%, sys=4.17%, ctx=351, majf=0, minf=1 00:18:54.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:54.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.771 issued rwts: total=3089,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.771 job3: (groupid=0, jobs=1): err= 0: pid=3772777: Thu Jul 25 01:04:47 2024 00:18:54.771 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:18:54.771 slat (usec): min=3, max=12440, avg=98.24, stdev=582.53 00:18:54.771 clat (usec): min=4436, max=61201, avg=13076.41, stdev=3548.36 00:18:54.771 lat (usec): min=4443, max=61206, avg=13174.65, stdev=3553.91 00:18:54.771 clat percentiles (usec): 00:18:54.771 | 1.00th=[ 6456], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[12125], 00:18:54.771 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:18:54.771 | 
70.00th=[13566], 80.00th=[14091], 90.00th=[15139], 95.00th=[16319], 00:18:54.771 | 99.00th=[21365], 99.50th=[24511], 99.90th=[56886], 99.95th=[56886], 00:18:54.771 | 99.99th=[61080] 00:18:54.771 write: IOPS=4714, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec); 0 zone resets 00:18:54.771 slat (usec): min=3, max=11274, avg=102.88, stdev=542.45 00:18:54.771 clat (usec): min=3693, max=62415, avg=14172.22, stdev=5965.61 00:18:54.771 lat (usec): min=3703, max=62424, avg=14275.10, stdev=5987.11 00:18:54.771 clat percentiles (usec): 00:18:54.771 | 1.00th=[ 5276], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[11994], 00:18:54.771 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13304], 00:18:54.771 | 70.00th=[13960], 80.00th=[15008], 90.00th=[17695], 95.00th=[22938], 00:18:54.771 | 99.00th=[45351], 99.50th=[54264], 99.90th=[62653], 99.95th=[62653], 00:18:54.771 | 99.99th=[62653] 00:18:54.771 bw ( KiB/s): min=17992, max=18928, per=29.13%, avg=18460.00, stdev=661.85, samples=2 00:18:54.771 iops : min= 4498, max= 4732, avg=4615.00, stdev=165.46, samples=2 00:18:54.771 lat (msec) : 4=0.06%, 10=9.22%, 20=85.64%, 50=4.57%, 100=0.50% 00:18:54.771 cpu : usr=6.97%, sys=8.76%, ctx=437, majf=0, minf=1 00:18:54.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:54.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.771 issued rwts: total=4608,4738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.771 00:18:54.771 Run status group 0 (all jobs): 00:18:54.771 READ: bw=57.0MiB/s (59.8MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=57.6MiB (60.4MB), run=1005-1010msec 00:18:54.771 WRITE: bw=61.9MiB/s (64.9MB/s), 13.9MiB/s-18.4MiB/s (14.6MB/s-19.3MB/s), io=62.5MiB (65.5MB), run=1005-1010msec 00:18:54.771 00:18:54.771 Disk stats (read/write): 00:18:54.771 nvme0n1: ios=2598/2727, merge=0/0, ticks=25595/36300, in_queue=61895, util=96.19% 00:18:54.771 nvme0n2: ios=3077/3095, merge=0/0, ticks=26394/29281, in_queue=55675, util=86.59% 00:18:54.771 nvme0n3: ios=2841/3072, merge=0/0, ticks=23221/18960, in_queue=42181, util=95.19% 00:18:54.771 nvme0n4: ios=3891/4096, merge=0/0, ticks=22650/28474, in_queue=51124, util=97.79% 00:18:54.771 01:04:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:54.771 [global] 00:18:54.771 thread=1 00:18:54.771 invalidate=1 00:18:54.771 rw=randwrite 00:18:54.771 time_based=1 00:18:54.771 runtime=1 00:18:54.771 ioengine=libaio 00:18:54.771 direct=1 00:18:54.771 bs=4096 00:18:54.771 iodepth=128 00:18:54.771 norandommap=0 00:18:54.771 numjobs=1 00:18:54.771 00:18:54.771 verify_dump=1 00:18:54.771 verify_backlog=512 00:18:54.771 verify_state_save=0 00:18:54.771 do_verify=1 00:18:54.771 verify=crc32c-intel 00:18:54.771 [job0] 00:18:54.771 filename=/dev/nvme0n1 00:18:54.771 [job1] 00:18:54.771 filename=/dev/nvme0n2 00:18:54.771 [job2] 00:18:54.771 filename=/dev/nvme0n3 00:18:54.771 [job3] 00:18:54.771 filename=/dev/nvme0n4 00:18:54.771 Could not set queue depth (nvme0n1) 00:18:54.771 Could not set queue depth (nvme0n2) 00:18:54.771 Could not set queue depth (nvme0n3) 00:18:54.771 Could not set queue depth (nvme0n4) 00:18:54.771 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.771 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.771 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.771 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.771 fio-3.35 00:18:54.771 Starting 4 threads 00:18:56.143 00:18:56.143 job0: (groupid=0, jobs=1): err= 0: pid=3773033: Thu Jul 25 01:04:49 2024 00:18:56.143 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:18:56.143 slat (usec): min=3, max=17681, avg=261.67, stdev=1472.92 00:18:56.143 clat (usec): min=16229, max=58815, avg=32581.24, stdev=8915.23 00:18:56.143 lat (usec): min=16235, max=58824, avg=32842.91, stdev=9021.26 00:18:56.143 clat percentiles (usec): 00:18:56.143 | 1.00th=[17695], 5.00th=[19006], 10.00th=[19530], 20.00th=[20579], 00:18:56.143 | 30.00th=[29230], 40.00th=[32637], 50.00th=[34341], 60.00th=[35914], 00:18:56.143 | 70.00th=[37487], 80.00th=[39584], 90.00th=[42730], 95.00th=[44303], 00:18:56.143 | 99.00th=[55837], 99.50th=[56886], 99.90th=[56886], 99.95th=[58983], 00:18:56.143 | 99.99th=[58983] 00:18:56.143 write: IOPS=1862, BW=7451KiB/s (7630kB/s)(7496KiB/1006msec); 0 zone resets 00:18:56.143 slat (usec): min=4, max=24834, avg=309.71, stdev=1485.96 00:18:56.143 clat (usec): min=3475, max=95828, avg=39840.11, stdev=17592.58 00:18:56.143 lat (usec): min=5575, max=95836, avg=40149.82, stdev=17708.13 00:18:56.143 clat percentiles (usec): 00:18:56.143 | 1.00th=[17433], 5.00th=[25822], 10.00th=[26870], 20.00th=[28181], 00:18:56.143 | 30.00th=[28705], 40.00th=[30016], 50.00th=[32113], 60.00th=[36963], 00:18:56.143 | 70.00th=[42206], 80.00th=[50070], 90.00th=[65799], 95.00th=[86508], 00:18:56.143 | 99.00th=[94897], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:18:56.143 | 99.99th=[95945] 00:18:56.143 bw ( KiB/s): min= 6160, max= 7808, per=14.21%, avg=6984.00, stdev=1165.31, samples=2 00:18:56.143 iops : min= 1540, max= 1952, avg=1746.00, stdev=291.33, samples=2 00:18:56.143 lat (msec) : 4=0.03%, 10=0.23%, 20=9.21%, 50=78.30%, 100=12.23% 00:18:56.143 cpu : usr=2.09%, sys=4.18%, ctx=236, majf=0, minf=1 00:18:56.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:56.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.143 issued rwts: total=1536,1874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.143 job1: (groupid=0, jobs=1): err= 0: pid=3773054: Thu Jul 25 01:04:49 2024 00:18:56.143 read: IOPS=2672, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1005msec) 00:18:56.143 slat (usec): min=3, max=12348, avg=143.19, stdev=853.40 00:18:56.143 clat (usec): min=3163, max=49548, avg=15684.39, stdev=7862.63 00:18:56.143 lat (usec): min=4495, max=49555, avg=15827.57, stdev=7941.58 00:18:56.143 clat percentiles (usec): 00:18:56.143 | 1.00th=[ 7177], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[10945], 00:18:56.143 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12387], 60.00th=[14877], 00:18:56.143 | 70.00th=[16450], 80.00th=[16909], 90.00th=[26346], 95.00th=[36439], 00:18:56.143 | 99.00th=[43779], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:18:56.143 | 99.99th=[49546] 00:18:56.143 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:18:56.143 slat (usec): min=4, max=11469, avg=191.83, stdev=854.08 
00:18:56.143 clat (usec): min=1142, max=66172, avg=27800.22, stdev=16881.21 00:18:56.143 lat (usec): min=1152, max=66179, avg=27992.06, stdev=17000.85 00:18:56.143 clat percentiles (usec): 00:18:56.143 | 1.00th=[ 6063], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[11994], 00:18:56.143 | 30.00th=[14091], 40.00th=[18744], 50.00th=[28443], 60.00th=[30540], 00:18:56.143 | 70.00th=[33817], 80.00th=[42730], 90.00th=[55837], 95.00th=[61604], 00:18:56.143 | 99.00th=[65799], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:18:56.143 | 99.99th=[66323] 00:18:56.143 bw ( KiB/s): min=11776, max=12784, per=24.99%, avg=12280.00, stdev=712.76, samples=2 00:18:56.143 iops : min= 2944, max= 3196, avg=3070.00, stdev=178.19, samples=2 00:18:56.143 lat (msec) : 2=0.05%, 4=0.12%, 10=10.37%, 20=51.96%, 50=30.36% 00:18:56.143 lat (msec) : 100=7.14% 00:18:56.143 cpu : usr=3.39%, sys=5.48%, ctx=335, majf=0, minf=1 00:18:56.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:56.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.143 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.143 job2: (groupid=0, jobs=1): err= 0: pid=3773089: Thu Jul 25 01:04:49 2024 00:18:56.143 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:18:56.143 slat (usec): min=2, max=17151, avg=109.37, stdev=677.69 00:18:56.143 clat (usec): min=5930, max=62122, avg=14068.23, stdev=8316.77 00:18:56.143 lat (usec): min=5937, max=62143, avg=14177.60, stdev=8377.29 00:18:56.143 clat percentiles (usec): 00:18:56.143 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[10814], 00:18:56.143 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:18:56.143 | 70.00th=[13042], 80.00th=[14222], 90.00th=[22414], 95.00th=[33162], 00:18:56.143 | 99.00th=[56886], 99.50th=[56886], 99.90th=[60556], 99.95th=[60556], 00:18:56.143 | 99.99th=[62129] 00:18:56.143 write: IOPS=4802, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1006msec); 0 zone resets 00:18:56.143 slat (usec): min=3, max=8877, avg=95.70, stdev=512.92 00:18:56.143 clat (usec): min=3503, max=28968, avg=12839.46, stdev=3609.72 00:18:56.143 lat (usec): min=3515, max=30204, avg=12935.16, stdev=3617.97 00:18:56.143 clat percentiles (usec): 00:18:56.143 | 1.00th=[ 6259], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[10683], 00:18:56.143 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11994], 60.00th=[12780], 00:18:56.143 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16057], 95.00th=[19268], 00:18:56.143 | 99.00th=[27132], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:18:56.143 | 99.99th=[28967] 00:18:56.143 bw ( KiB/s): min=17152, max=20480, per=38.29%, avg=18816.00, stdev=2353.25, samples=2 00:18:56.143 iops : min= 4288, max= 5120, avg=4704.00, stdev=588.31, samples=2 00:18:56.143 lat (msec) : 4=0.33%, 10=12.70%, 20=79.29%, 50=6.88%, 100=0.81% 00:18:56.143 cpu : usr=3.68%, sys=6.67%, ctx=408, majf=0, minf=1 00:18:56.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:56.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.143 issued rwts: total=4608,4831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.143 job3: (groupid=0, 
jobs=1): err= 0: pid=3773104: Thu Jul 25 01:04:49 2024 00:18:56.143 read: IOPS=2528, BW=9.88MiB/s (10.4MB/s)(10.3MiB/1046msec) 00:18:56.143 slat (usec): min=2, max=10426, avg=129.95, stdev=715.18 00:18:56.143 clat (usec): min=8146, max=71563, avg=17254.85, stdev=9119.64 00:18:56.143 lat (usec): min=8171, max=71575, avg=17384.80, stdev=9168.69 00:18:56.143 clat percentiles (usec): 00:18:56.143 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10945], 20.00th=[12256], 00:18:56.143 | 30.00th=[13042], 40.00th=[14484], 50.00th=[15270], 60.00th=[17171], 00:18:56.143 | 70.00th=[18220], 80.00th=[19268], 90.00th=[21365], 95.00th=[25560], 00:18:56.143 | 99.00th=[66323], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:18:56.143 | 99.99th=[71828] 00:18:56.143 write: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1046msec); 0 zone resets 00:18:56.143 slat (usec): min=3, max=24070, avg=205.72, stdev=1163.07 00:18:56.143 clat (usec): min=3979, max=95806, avg=28054.55, stdev=19630.09 00:18:56.143 lat (usec): min=4458, max=95835, avg=28260.27, stdev=19753.86 00:18:56.143 clat percentiles (usec): 00:18:56.143 | 1.00th=[ 6194], 5.00th=[ 7177], 10.00th=[11207], 20.00th=[12518], 00:18:56.143 | 30.00th=[14746], 40.00th=[16909], 50.00th=[23200], 60.00th=[28705], 00:18:56.143 | 70.00th=[32900], 80.00th=[36439], 90.00th=[56361], 95.00th=[74974], 00:18:56.143 | 99.00th=[91751], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:18:56.143 | 99.99th=[95945] 00:18:56.143 bw ( KiB/s): min=10168, max=14064, per=24.66%, avg=12116.00, stdev=2754.89, samples=2 00:18:56.143 iops : min= 2542, max= 3516, avg=3029.00, stdev=688.72, samples=2 00:18:56.143 lat (msec) : 4=0.02%, 10=6.54%, 20=56.29%, 50=29.33%, 100=7.82% 00:18:56.143 cpu : usr=2.30%, sys=5.74%, ctx=344, majf=0, minf=1 00:18:56.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:56.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.143 issued rwts: total=2645,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.143 00:18:56.143 Run status group 0 (all jobs): 00:18:56.143 READ: bw=42.9MiB/s (44.9MB/s), 6107KiB/s-17.9MiB/s (6254kB/s-18.8MB/s), io=44.8MiB (47.0MB), run=1005-1046msec 00:18:56.144 WRITE: bw=48.0MiB/s (50.3MB/s), 7451KiB/s-18.8MiB/s (7630kB/s-19.7MB/s), io=50.2MiB (52.6MB), run=1005-1046msec 00:18:56.144 00:18:56.144 Disk stats (read/write): 00:18:56.144 nvme0n1: ios=1131/1536, merge=0/0, ticks=13570/21518, in_queue=35088, util=98.60% 00:18:56.144 nvme0n2: ios=2312/2560, merge=0/0, ticks=34183/70034, in_queue=104217, util=86.38% 00:18:56.144 nvme0n3: ios=4086/4096, merge=0/0, ticks=18040/14175, in_queue=32215, util=96.75% 00:18:56.144 nvme0n4: ios=2174/2560, merge=0/0, ticks=15748/33916, in_queue=49664, util=90.27% 00:18:56.144 01:04:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:56.144 01:04:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3773261 00:18:56.144 01:04:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:56.144 01:04:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:56.144 [global] 00:18:56.144 thread=1 00:18:56.144 invalidate=1 00:18:56.144 rw=read 00:18:56.144 time_based=1 00:18:56.144 runtime=10 00:18:56.144 ioengine=libaio 00:18:56.144 direct=1 00:18:56.144 bs=4096 
00:18:56.144 iodepth=1 00:18:56.144 norandommap=1 00:18:56.144 numjobs=1 00:18:56.144 00:18:56.144 [job0] 00:18:56.144 filename=/dev/nvme0n1 00:18:56.144 [job1] 00:18:56.144 filename=/dev/nvme0n2 00:18:56.144 [job2] 00:18:56.144 filename=/dev/nvme0n3 00:18:56.144 [job3] 00:18:56.144 filename=/dev/nvme0n4 00:18:56.144 Could not set queue depth (nvme0n1) 00:18:56.144 Could not set queue depth (nvme0n2) 00:18:56.144 Could not set queue depth (nvme0n3) 00:18:56.144 Could not set queue depth (nvme0n4) 00:18:56.401 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.401 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.401 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.401 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.401 fio-3.35 00:18:56.401 Starting 4 threads 00:18:59.676 01:04:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:59.676 01:04:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:59.676 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6348800, buflen=4096 00:18:59.676 fio: pid=3773352, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:59.676 01:04:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.676 01:04:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:59.676 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=37081088, buflen=4096 00:18:59.676 fio: pid=3773351, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:59.933 01:04:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.933 01:04:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:59.933 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4001792, buflen=4096 00:18:59.933 fio: pid=3773349, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:00.190 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:00.190 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:00.190 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7618560, buflen=4096 00:19:00.190 fio: pid=3773350, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:00.190 00:19:00.190 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3773349: Thu Jul 25 01:04:53 2024 00:19:00.190 read: IOPS=285, BW=1142KiB/s (1169kB/s)(3908KiB/3423msec) 00:19:00.190 slat (usec): min=4, max=2871, avg=21.21, stdev=91.81 00:19:00.190 clat (usec): min=228, max=41126, avg=3453.46, stdev=10745.22 00:19:00.190 lat (usec): min=235, max=43988, avg=3474.68, stdev=10757.06 00:19:00.190 clat percentiles (usec): 
00:19:00.190 | 1.00th=[ 253], 5.00th=[ 281], 10.00th=[ 297], 20.00th=[ 318], 00:19:00.190 | 30.00th=[ 338], 40.00th=[ 359], 50.00th=[ 379], 60.00th=[ 396], 00:19:00.190 | 70.00th=[ 420], 80.00th=[ 457], 90.00th=[ 529], 95.00th=[41157], 00:19:00.190 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:00.190 | 99.99th=[41157] 00:19:00.190 bw ( KiB/s): min= 96, max= 5504, per=8.86%, avg=1288.00, stdev=2178.76, samples=6 00:19:00.190 iops : min= 24, max= 1376, avg=322.00, stdev=544.69, samples=6 00:19:00.190 lat (usec) : 250=0.61%, 500=87.63%, 750=3.89% 00:19:00.190 lat (msec) : 2=0.10%, 4=0.10%, 50=7.57% 00:19:00.190 cpu : usr=0.15%, sys=0.64%, ctx=981, majf=0, minf=1 00:19:00.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.190 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.190 issued rwts: total=978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.190 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3773350: Thu Jul 25 01:04:53 2024 00:19:00.190 read: IOPS=503, BW=2013KiB/s (2061kB/s)(7440KiB/3696msec) 00:19:00.191 slat (usec): min=4, max=8841, avg=21.52, stdev=260.25 00:19:00.191 clat (usec): min=240, max=41667, avg=1947.52, stdev=7835.58 00:19:00.191 lat (usec): min=250, max=49993, avg=1969.05, stdev=7886.17 00:19:00.191 clat percentiles (usec): 00:19:00.191 | 1.00th=[ 262], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 330], 00:19:00.191 | 30.00th=[ 343], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 375], 00:19:00.191 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 469], 95.00th=[ 553], 00:19:00.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:19:00.191 | 99.99th=[41681] 00:19:00.191 bw ( KiB/s): min= 96, max=10616, per=14.58%, avg=2120.71, stdev=3884.15, samples=7 00:19:00.191 iops : min= 24, max= 2654, avg=530.14, stdev=971.06, samples=7 00:19:00.191 lat (usec) : 250=0.27%, 500=92.26%, 750=3.33% 00:19:00.191 lat (msec) : 2=0.16%, 10=0.05%, 50=3.87% 00:19:00.191 cpu : usr=0.41%, sys=0.60%, ctx=1866, majf=0, minf=1 00:19:00.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.191 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.191 issued rwts: total=1861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.191 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3773351: Thu Jul 25 01:04:53 2024 00:19:00.191 read: IOPS=2868, BW=11.2MiB/s (11.7MB/s)(35.4MiB/3156msec) 00:19:00.191 slat (usec): min=5, max=18614, avg=14.33, stdev=230.62 00:19:00.191 clat (usec): min=256, max=41142, avg=328.79, stdev=435.08 00:19:00.191 lat (usec): min=261, max=41149, avg=343.12, stdev=492.87 00:19:00.191 clat percentiles (usec): 00:19:00.191 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:19:00.191 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 322], 00:19:00.191 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 392], 00:19:00.191 | 99.00th=[ 519], 99.50th=[ 603], 99.90th=[ 1037], 99.95th=[ 2180], 00:19:00.191 | 99.99th=[41157] 00:19:00.191 bw ( KiB/s): min=10112, max=12544, per=79.04%, 
avg=11496.00, stdev=784.98, samples=6 00:19:00.191 iops : min= 2528, max= 3136, avg=2874.00, stdev=196.24, samples=6 00:19:00.191 lat (usec) : 500=98.71%, 750=1.06%, 1000=0.09% 00:19:00.191 lat (msec) : 2=0.08%, 4=0.04%, 50=0.01% 00:19:00.191 cpu : usr=1.97%, sys=5.13%, ctx=9057, majf=0, minf=1 00:19:00.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.191 issued rwts: total=9054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.191 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3773352: Thu Jul 25 01:04:53 2024 00:19:00.191 read: IOPS=528, BW=2112KiB/s (2162kB/s)(6200KiB/2936msec) 00:19:00.191 slat (nsec): min=4518, max=36289, avg=13085.79, stdev=5287.17 00:19:00.191 clat (usec): min=291, max=41341, avg=1863.19, stdev=7574.55 00:19:00.191 lat (usec): min=303, max=41356, avg=1876.27, stdev=7576.65 00:19:00.191 clat percentiles (usec): 00:19:00.191 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 359], 00:19:00.191 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 379], 00:19:00.191 | 70.00th=[ 404], 80.00th=[ 449], 90.00th=[ 529], 95.00th=[ 627], 00:19:00.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:00.191 | 99.99th=[41157] 00:19:00.191 bw ( KiB/s): min= 96, max= 8912, per=16.91%, avg=2460.80, stdev=3832.77, samples=5 00:19:00.191 iops : min= 24, max= 2228, avg=615.20, stdev=958.19, samples=5 00:19:00.191 lat (usec) : 500=86.20%, 750=9.80%, 1000=0.13% 00:19:00.191 lat (msec) : 2=0.19%, 50=3.61% 00:19:00.191 cpu : usr=0.24%, sys=0.82%, ctx=1551, majf=0, minf=1 00:19:00.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.191 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.191 issued rwts: total=1551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.191 00:19:00.191 Run status group 0 (all jobs): 00:19:00.191 READ: bw=14.2MiB/s (14.9MB/s), 1142KiB/s-11.2MiB/s (1169kB/s-11.7MB/s), io=52.5MiB (55.1MB), run=2936-3696msec 00:19:00.191 00:19:00.191 Disk stats (read/write): 00:19:00.191 nvme0n1: ios=998/0, merge=0/0, ticks=3460/0, in_queue=3460, util=99.43% 00:19:00.191 nvme0n2: ios=1894/0, merge=0/0, ticks=4401/0, in_queue=4401, util=99.06% 00:19:00.191 nvme0n3: ios=8928/0, merge=0/0, ticks=2847/0, in_queue=2847, util=95.81% 00:19:00.191 nvme0n4: ios=1546/0, merge=0/0, ticks=2793/0, in_queue=2793, util=96.74% 00:19:00.448 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:00.448 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:00.705 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:00.705 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:00.962 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:00.962 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:01.220 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.220 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3773261 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:01.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:01.478 nvmf hotplug test: fio failed as expected 00:19:01.478 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.735 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:01.735 rmmod nvme_tcp 00:19:01.735 rmmod nvme_fabrics 00:19:01.735 rmmod nvme_keyring 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@125 -- # return 0 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3771231 ']' 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3771231 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3771231 ']' 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3771231 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3771231 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3771231' 00:19:01.993 killing process with pid 3771231 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3771231 00:19:01.993 01:04:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3771231 00:19:02.251 01:04:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:02.251 01:04:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:02.251 01:04:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:02.251 01:04:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.251 01:04:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:02.251 01:04:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.251 01:04:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.251 01:04:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.150 01:04:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:04.150 00:19:04.150 real 0m23.854s 00:19:04.150 user 1m23.174s 00:19:04.150 sys 0m6.618s 00:19:04.150 01:04:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:04.150 01:04:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.150 ************************************ 00:19:04.150 END TEST nvmf_fio_target 00:19:04.150 ************************************ 00:19:04.151 01:04:57 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:04.151 01:04:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:04.151 01:04:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:04.151 01:04:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.151 ************************************ 00:19:04.151 START TEST nvmf_bdevio 00:19:04.151 ************************************ 00:19:04.151 01:04:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:04.442 * Looking for test storage... 
00:19:04.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.442 01:04:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.443 01:04:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:06.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:06.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:06.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:06.346 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:06.346 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:06.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:19:06.605 00:19:06.605 --- 10.0.0.2 ping statistics --- 00:19:06.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.605 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:19:06.605 00:19:06.605 --- 10.0.0.1 ping statistics --- 00:19:06.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.605 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3775974 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3775974 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3775974 ']' 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.605 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.605 [2024-07-25 01:04:59.616883] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:19:06.606 [2024-07-25 01:04:59.616955] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.606 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.606 [2024-07-25 01:04:59.681022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.864 [2024-07-25 01:04:59.767788] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.864 [2024-07-25 01:04:59.767832] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
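The target was started with core mask -m 0x78 (the bdevio initiator later uses -c 0x7). A quick, hedged way to decode such an SPDK core mask — 0x78 is binary 1111000, i.e. cores 3 through 6 — which is exactly what the four "Reactor started on core" notices just below report:

mask=0x78                         # core mask handed to nvmf_tgt via -m
for core in $(seq 0 31); do       # scan the low 32 bits
  if (( (mask >> core) & 1 )); then
    echo "reactor expected on core $core"
  fi
done                              # prints cores 3 4 5 6 for 0x78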
00:19:06.864 [2024-07-25 01:04:59.767860] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.864 [2024-07-25 01:04:59.767871] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.864 [2024-07-25 01:04:59.767880] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.864 [2024-07-25 01:04:59.767966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:06.864 [2024-07-25 01:04:59.767996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:06.864 [2024-07-25 01:04:59.768052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:06.864 [2024-07-25 01:04:59.768054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.864 [2024-07-25 01:04:59.903785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.864 Malloc0 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
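The five rpc_cmd calls just traced provision the whole target. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the same sequence can be driven by hand; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket (reachable from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk, since it is a filesystem Unix socket), with all flags taken verbatim from the traces above:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420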
00:19:06.864 [2024-07-25 01:04:59.954918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:06.864 { 00:19:06.864 "params": { 00:19:06.864 "name": "Nvme$subsystem", 00:19:06.864 "trtype": "$TEST_TRANSPORT", 00:19:06.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:06.864 "adrfam": "ipv4", 00:19:06.864 "trsvcid": "$NVMF_PORT", 00:19:06.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:06.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:06.864 "hdgst": ${hdgst:-false}, 00:19:06.864 "ddgst": ${ddgst:-false} 00:19:06.864 }, 00:19:06.864 "method": "bdev_nvme_attach_controller" 00:19:06.864 } 00:19:06.864 EOF 00:19:06.864 )") 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:06.864 01:04:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:06.864 "params": { 00:19:06.864 "name": "Nvme1", 00:19:06.864 "trtype": "tcp", 00:19:06.864 "traddr": "10.0.0.2", 00:19:06.864 "adrfam": "ipv4", 00:19:06.864 "trsvcid": "4420", 00:19:06.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.864 "hdgst": false, 00:19:06.864 "ddgst": false 00:19:06.864 }, 00:19:06.864 "method": "bdev_nvme_attach_controller" 00:19:06.864 }' 00:19:06.864 [2024-07-25 01:04:59.996946] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
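bdevio never sees a config file on disk: gen_nvmf_target_json expands the heredoc template above into the concrete bdev_nvme_attach_controller object printed by printf, and the binary reads it through process substitution as /dev/fd/62. A hedged standalone equivalent writes the same object into a regular file — the surrounding "subsystems"/"bdev" wrapper is assumed from SPDK's usual --json layout, since the trace only prints the inner object, and the file name is hypothetical:

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json   # run from the spdk checkout root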
00:19:06.864 [2024-07-25 01:04:59.997020] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775997 ] 00:19:07.121 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.121 [2024-07-25 01:05:00.062132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:07.121 [2024-07-25 01:05:00.155213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.121 [2024-07-25 01:05:00.155273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.121 [2024-07-25 01:05:00.155277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.378 I/O targets: 00:19:07.378 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:07.378 00:19:07.378 00:19:07.378 CUnit - A unit testing framework for C - Version 2.1-3 00:19:07.378 http://cunit.sourceforge.net/ 00:19:07.378 00:19:07.378 00:19:07.378 Suite: bdevio tests on: Nvme1n1 00:19:07.378 Test: blockdev write read block ...passed 00:19:07.378 Test: blockdev write zeroes read block ...passed 00:19:07.378 Test: blockdev write zeroes read no split ...passed 00:19:07.378 Test: blockdev write zeroes read split ...passed 00:19:07.378 Test: blockdev write zeroes read split partial ...passed 00:19:07.378 Test: blockdev reset ...[2024-07-25 01:05:00.501068] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.378 [2024-07-25 01:05:00.501178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6cf80 (9): Bad file descriptor 00:19:07.635 [2024-07-25 01:05:00.597470] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:07.635 passed 00:19:07.635 Test: blockdev write read 8 blocks ...passed 00:19:07.635 Test: blockdev write read size > 128k ...passed 00:19:07.635 Test: blockdev write read invalid size ...passed 00:19:07.635 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:07.635 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:07.635 Test: blockdev write read max offset ...passed 00:19:07.635 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:07.893 Test: blockdev writev readv 8 blocks ...passed 00:19:07.893 Test: blockdev writev readv 30 x 1block ...passed 00:19:07.893 Test: blockdev writev readv block ...passed 00:19:07.893 Test: blockdev writev readv size > 128k ...passed 00:19:07.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:07.893 Test: blockdev comparev and writev ...[2024-07-25 01:05:00.896384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:07.893 [2024-07-25 01:05:00.896420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.896444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:07.893 [2024-07-25 01:05:00.896460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.896835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:07.893 [2024-07-25 01:05:00.896860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.896882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:07.893 [2024-07-25 01:05:00.896899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.897274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:07.893 [2024-07-25 01:05:00.897300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.897322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:07.893 [2024-07-25 01:05:00.897339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.897719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:07.893 [2024-07-25 01:05:00.897744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.897765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:07.893 [2024-07-25 01:05:00.897782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:07.893 passed 00:19:07.893 Test: blockdev nvme passthru rw ...passed 00:19:07.893 Test: blockdev nvme passthru vendor specific ...[2024-07-25 01:05:00.980585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:07.893 [2024-07-25 01:05:00.980613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.980793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:07.893 [2024-07-25 01:05:00.980816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.980995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:07.893 [2024-07-25 01:05:00.981024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:07.893 [2024-07-25 01:05:00.981201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:07.893 [2024-07-25 01:05:00.981226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:07.893 passed 00:19:07.893 Test: blockdev nvme admin passthru ...passed 00:19:07.893 Test: blockdev copy ...passed 00:19:07.893 00:19:07.893 Run Summary: Type Total Ran Passed Failed Inactive 00:19:07.893 suites 1 1 n/a 0 0 00:19:07.893 tests 23 23 23 0 0 00:19:07.893 asserts 152 152 152 0 n/a 00:19:07.893 00:19:07.893 Elapsed time = 1.314 seconds 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.150 rmmod nvme_tcp 00:19:08.150 rmmod nvme_fabrics 00:19:08.150 rmmod nvme_keyring 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3775974 ']' 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3775974 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3775974 ']' 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3775974 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3775974 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3775974' 00:19:08.150 killing process with pid 3775974 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3775974 00:19:08.150 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3775974 00:19:08.408 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:08.408 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:08.408 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:08.408 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.408 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.408 01:05:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.408 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.408 01:05:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.997 01:05:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:10.997 00:19:10.997 real 0m6.322s 00:19:10.997 user 0m10.041s 00:19:10.997 sys 0m2.096s 00:19:10.997 01:05:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:10.997 01:05:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:10.997 ************************************ 00:19:10.997 END TEST nvmf_bdevio 00:19:10.997 ************************************ 00:19:10.997 01:05:03 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:10.997 01:05:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:10.997 01:05:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:10.997 01:05:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:10.997 ************************************ 00:19:10.997 START TEST nvmf_auth_target 00:19:10.997 ************************************ 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:10.997 * Looking for test storage... 
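Before the auth test can rebuild the topology, the bdevio teardown above has to undo everything the setup did: nvmftestfini kills the target by its recorded nvmfpid, modprobe -r pulls the NVMe host modules back out (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines), and the interface addresses are flushed. A condensed sketch of those steps; the explicit netns deletion is an assumption about what _remove_spdk_ns does behind its xtrace_disable guard:

kill 3775974                      # the nvmfpid recorded when nvmf_tgt started
modprobe -v -r nvme-tcp           # drags nvme_fabrics and nvme_keyring out with it
modprobe -v -r nvme-fabrics
ip netns del cvl_0_0_ns_spdk      # assumption: this is the core of _remove_spdk_ns
ip -4 addr flush cvl_0_1          # initiator-side address, as traced above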
00:19:10.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:10.997 01:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.897 01:05:05 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:12.897 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:12.897 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.897 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:12.898 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:12.898 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:12.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:19:12.898 00:19:12.898 --- 10.0.0.2 ping statistics --- 00:19:12.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.898 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:19:12.898 00:19:12.898 --- 10.0.0.1 ping statistics --- 00:19:12.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.898 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3778067 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3778067 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3778067 ']' 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
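With the target up (nvmfpid 3778067) and the host-side spdk_tgt about to start, auth.sh generates its DH-HMAC-CHAP key material, as the gen_dhchap_key traces below show: raw hex is pulled from /dev/urandom with xxd, then wrapped into SPDK's DHHC-1 secret representation by an inline python step whose body the trace elides. A hedged sketch of that recipe for the first key (48 hex chars, digest id 0 for "null"); the CRC-32-plus-base64 wrapping follows the NVMe TP-8006 secret format and is an assumption here:

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex chars
keyfile=$(mktemp -t spdk.key-null.XXX)
b64=$(python3 - "$key_hex" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")   # assumption: little-endian CRC tail
print(base64.b64encode(key + crc).decode())
PY
)
printf 'DHHC-1:00:%s:\n' "$b64" > "$keyfile"      # "00" = null digest, per "format_dhchap_key ... 0"
chmod 0600 "$keyfile"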
00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:12.898 01:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.156 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:13.156 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3778207 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7dc207ed65e415f86593f4ba1077301486327a91502370a9 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XdH 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7dc207ed65e415f86593f4ba1077301486327a91502370a9 0 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7dc207ed65e415f86593f4ba1077301486327a91502370a9 0 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7dc207ed65e415f86593f4ba1077301486327a91502370a9 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XdH 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XdH 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.XdH 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6f1ac5cb575a3ef6ca34960b6883f32e0c2dfe9f206f3facb4634031d5e9a18e 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pWA 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6f1ac5cb575a3ef6ca34960b6883f32e0c2dfe9f206f3facb4634031d5e9a18e 3 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6f1ac5cb575a3ef6ca34960b6883f32e0c2dfe9f206f3facb4634031d5e9a18e 3 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6f1ac5cb575a3ef6ca34960b6883f32e0c2dfe9f206f3facb4634031d5e9a18e 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pWA 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pWA 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.pWA 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dd8c842a5d91cde4cff43c019218fcfd 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IPL 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dd8c842a5d91cde4cff43c019218fcfd 1 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dd8c842a5d91cde4cff43c019218fcfd 1 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=dd8c842a5d91cde4cff43c019218fcfd 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IPL 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IPL 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.IPL 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=70111593b0d322b9071d1f24869c0829e1c1bc5f51c9fd44 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jMn 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 70111593b0d322b9071d1f24869c0829e1c1bc5f51c9fd44 2 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 70111593b0d322b9071d1f24869c0829e1c1bc5f51c9fd44 2 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=70111593b0d322b9071d1f24869c0829e1c1bc5f51c9fd44 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:13.157 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jMn 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jMn 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.jMn 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d9a707fe3b492ba1941a0a53873a6de99e7627c2d612e6b7 00:19:13.415 
01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.pwc 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d9a707fe3b492ba1941a0a53873a6de99e7627c2d612e6b7 2 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d9a707fe3b492ba1941a0a53873a6de99e7627c2d612e6b7 2 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d9a707fe3b492ba1941a0a53873a6de99e7627c2d612e6b7 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.pwc 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.pwc 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.pwc 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b8693ddd31d568782056a37d42b30cb1 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kLe 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b8693ddd31d568782056a37d42b30cb1 1 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b8693ddd31d568782056a37d42b30cb1 1 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b8693ddd31d568782056a37d42b30cb1 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kLe 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kLe 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.kLe 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:13.415 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b0939b1c348da6e683df01bb4f05249244c0a34eb1b660b5dc9b988cc510e9bf 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NAF 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b0939b1c348da6e683df01bb4f05249244c0a34eb1b660b5dc9b988cc510e9bf 3 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b0939b1c348da6e683df01bb4f05249244c0a34eb1b660b5dc9b988cc510e9bf 3 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b0939b1c348da6e683df01bb4f05249244c0a34eb1b660b5dc9b988cc510e9bf 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NAF 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NAF 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.NAF 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3778067 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3778067 ']' 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
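The gen_dhchap_key calls traced above all follow one pattern: draw len/2 random bytes as a hex string with xxd, wrap that string in a DHHC-1 secret, and park it in a mode-0600 temp file that keyring_file_add_key registers with the target and host further down. The trace only shows "python -", not the heredoc it feeds, so the sketch below is a reconstruction assuming the standard NVMe-oF DH-HMAC-CHAP secret layout: base64 of the key bytes plus a little-endian CRC-32 trailer, prefixed with DHHC-1 and the two-digit hash id from the digests map above (00=null, 01=sha256, 02=sha384, 03=sha512).

# minimal sketch of "gen_dhchap_key null 48", assuming the TP 8006
# secret layout; not copied verbatim from nvmf/common.sh
key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC-32 trailer (assumed layout)
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode(), end="")
PY
chmod 0600 "$file"                            # matches the chmod 0600 in the trace

Base64-decoding the DHHC-1:00:N2Rj... secret passed to nvme connect later in the trace gives back the 48-character key generated here (7dc207ed...), which is a quick way to check that an encoder matches what the test produced.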
00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:13.416 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3778207 /var/tmp/host.sock 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3778207 ']' 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:13.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:13.673 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XdH 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.XdH 00:19:13.931 01:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.XdH 00:19:14.189 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.pWA ]] 00:19:14.189 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pWA 00:19:14.189 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.189 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.189 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.189 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pWA 00:19:14.189 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pWA 00:19:14.446 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:14.446 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IPL 00:19:14.446 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.446 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.446 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.446 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IPL 00:19:14.446 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.IPL 00:19:14.704 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.jMn ]] 00:19:14.704 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jMn 00:19:14.704 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.704 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.704 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.704 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jMn 00:19:14.704 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jMn 00:19:14.962 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:14.962 01:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pwc 00:19:14.962 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.962 01:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.962 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.962 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.pwc 00:19:14.962 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.pwc 00:19:15.220 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.kLe ]] 00:19:15.220 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kLe 00:19:15.220 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.220 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.220 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.220 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kLe 00:19:15.220 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.kLe 00:19:15.477 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:15.477 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.NAF 00:19:15.477 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.477 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.477 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.477 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.NAF 00:19:15.477 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.NAF 00:19:15.734 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:15.734 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:15.734 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.734 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.734 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.734 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.992 01:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.992 01:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.992 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.992 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.249 00:19:16.249 01:05:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.249 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.249 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.507 { 00:19:16.507 "cntlid": 1, 00:19:16.507 "qid": 0, 00:19:16.507 "state": "enabled", 00:19:16.507 "listen_address": { 00:19:16.507 "trtype": "TCP", 00:19:16.507 "adrfam": "IPv4", 00:19:16.507 "traddr": "10.0.0.2", 00:19:16.507 "trsvcid": "4420" 00:19:16.507 }, 00:19:16.507 "peer_address": { 00:19:16.507 "trtype": "TCP", 00:19:16.507 "adrfam": "IPv4", 00:19:16.507 "traddr": "10.0.0.1", 00:19:16.507 "trsvcid": "34862" 00:19:16.507 }, 00:19:16.507 "auth": { 00:19:16.507 "state": "completed", 00:19:16.507 "digest": "sha256", 00:19:16.507 "dhgroup": "null" 00:19:16.507 } 00:19:16.507 } 00:19:16.507 ]' 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:16.507 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.764 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.765 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.765 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.765 01:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:19:18.136 01:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.136 01:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.136 01:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.136 01:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:18.136 01:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.136 01:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.136 01:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.136 01:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.136 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.394 00:19:18.394 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.394 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.394 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.652 { 00:19:18.652 "cntlid": 3, 00:19:18.652 "qid": 0, 00:19:18.652 "state": "enabled", 00:19:18.652 "listen_address": { 00:19:18.652 
"trtype": "TCP", 00:19:18.652 "adrfam": "IPv4", 00:19:18.652 "traddr": "10.0.0.2", 00:19:18.652 "trsvcid": "4420" 00:19:18.652 }, 00:19:18.652 "peer_address": { 00:19:18.652 "trtype": "TCP", 00:19:18.652 "adrfam": "IPv4", 00:19:18.652 "traddr": "10.0.0.1", 00:19:18.652 "trsvcid": "58212" 00:19:18.652 }, 00:19:18.652 "auth": { 00:19:18.652 "state": "completed", 00:19:18.652 "digest": "sha256", 00:19:18.652 "dhgroup": "null" 00:19:18.652 } 00:19:18.652 } 00:19:18.652 ]' 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:18.652 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.909 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.909 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.910 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.167 01:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:19:20.099 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.099 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.099 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.099 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.099 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.099 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.099 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.099 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.359 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.617 00:19:20.617 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.617 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.617 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.874 { 00:19:20.874 "cntlid": 5, 00:19:20.874 "qid": 0, 00:19:20.874 "state": "enabled", 00:19:20.874 "listen_address": { 00:19:20.874 "trtype": "TCP", 00:19:20.874 "adrfam": "IPv4", 00:19:20.874 "traddr": "10.0.0.2", 00:19:20.874 "trsvcid": "4420" 00:19:20.874 }, 00:19:20.874 "peer_address": { 00:19:20.874 "trtype": "TCP", 00:19:20.874 "adrfam": "IPv4", 00:19:20.874 "traddr": "10.0.0.1", 00:19:20.874 "trsvcid": "58250" 00:19:20.874 }, 00:19:20.874 "auth": { 00:19:20.874 "state": "completed", 00:19:20.874 "digest": "sha256", 00:19:20.874 "dhgroup": "null" 00:19:20.874 } 00:19:20.874 } 00:19:20.874 ]' 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.874 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.131 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:19:22.061 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.319 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.319 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.319 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.319 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.319 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.319 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.319 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.577 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.835 00:19:22.835 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.835 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.835 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.092 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.092 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.092 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.092 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.092 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.092 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.092 { 00:19:23.092 "cntlid": 7, 00:19:23.092 "qid": 0, 00:19:23.092 "state": "enabled", 00:19:23.092 "listen_address": { 00:19:23.092 "trtype": "TCP", 00:19:23.093 "adrfam": "IPv4", 00:19:23.093 "traddr": "10.0.0.2", 00:19:23.093 "trsvcid": "4420" 00:19:23.093 }, 00:19:23.093 "peer_address": { 00:19:23.093 "trtype": "TCP", 00:19:23.093 "adrfam": "IPv4", 00:19:23.093 "traddr": "10.0.0.1", 00:19:23.093 "trsvcid": "58272" 00:19:23.093 }, 00:19:23.093 "auth": { 00:19:23.093 "state": "completed", 00:19:23.093 "digest": "sha256", 00:19:23.093 "dhgroup": "null" 00:19:23.093 } 00:19:23.093 } 00:19:23.093 ]' 00:19:23.093 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.093 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.093 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.093 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:23.093 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.093 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.093 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.093 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.350 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.719 
01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.719 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.977 00:19:24.977 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.977 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.977 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.235 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.235 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.235 01:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.235 01:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.235 01:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.235 01:05:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.235 { 00:19:25.235 "cntlid": 9, 00:19:25.235 "qid": 0, 00:19:25.236 "state": "enabled", 00:19:25.236 "listen_address": { 00:19:25.236 "trtype": "TCP", 00:19:25.236 "adrfam": "IPv4", 00:19:25.236 "traddr": "10.0.0.2", 00:19:25.236 "trsvcid": "4420" 00:19:25.236 }, 00:19:25.236 "peer_address": { 00:19:25.236 "trtype": "TCP", 00:19:25.236 "adrfam": "IPv4", 00:19:25.236 "traddr": "10.0.0.1", 00:19:25.236 "trsvcid": "58294" 00:19:25.236 }, 00:19:25.236 "auth": { 00:19:25.236 "state": "completed", 00:19:25.236 "digest": "sha256", 00:19:25.236 "dhgroup": "ffdhe2048" 00:19:25.236 } 00:19:25.236 } 00:19:25.236 ]' 00:19:25.236 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.236 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.236 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.552 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:25.552 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.552 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.552 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.552 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.809 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:19:26.739 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.739 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.739 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.739 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.739 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.739 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.739 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.739 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.996 01:05:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.996 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.253 00:19:27.253 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.253 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.253 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.510 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.510 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.510 01:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.510 01:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.510 01:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.510 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.510 { 00:19:27.510 "cntlid": 11, 00:19:27.510 "qid": 0, 00:19:27.510 "state": "enabled", 00:19:27.510 "listen_address": { 00:19:27.510 "trtype": "TCP", 00:19:27.510 "adrfam": "IPv4", 00:19:27.510 "traddr": "10.0.0.2", 00:19:27.510 "trsvcid": "4420" 00:19:27.511 }, 00:19:27.511 "peer_address": { 00:19:27.511 "trtype": "TCP", 00:19:27.511 "adrfam": "IPv4", 00:19:27.511 "traddr": "10.0.0.1", 00:19:27.511 "trsvcid": "58168" 00:19:27.511 }, 00:19:27.511 "auth": { 00:19:27.511 "state": "completed", 00:19:27.511 "digest": "sha256", 00:19:27.511 "dhgroup": "ffdhe2048" 00:19:27.511 } 00:19:27.511 } 00:19:27.511 ]' 00:19:27.511 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.511 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.511 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.511 01:05:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.511 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.768 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.768 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.768 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.025 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:19:28.957 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.957 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.957 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.957 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.957 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.957 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.957 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.957 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.214 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.471 00:19:29.471 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.471 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.471 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.729 { 00:19:29.729 "cntlid": 13, 00:19:29.729 "qid": 0, 00:19:29.729 "state": "enabled", 00:19:29.729 "listen_address": { 00:19:29.729 "trtype": "TCP", 00:19:29.729 "adrfam": "IPv4", 00:19:29.729 "traddr": "10.0.0.2", 00:19:29.729 "trsvcid": "4420" 00:19:29.729 }, 00:19:29.729 "peer_address": { 00:19:29.729 "trtype": "TCP", 00:19:29.729 "adrfam": "IPv4", 00:19:29.729 "traddr": "10.0.0.1", 00:19:29.729 "trsvcid": "58216" 00:19:29.729 }, 00:19:29.729 "auth": { 00:19:29.729 "state": "completed", 00:19:29.729 "digest": "sha256", 00:19:29.729 "dhgroup": "ffdhe2048" 00:19:29.729 } 00:19:29.729 } 00:19:29.729 ]' 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.729 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.986 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.986 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.986 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.243 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:19:31.172 01:05:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.172 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.172 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.172 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.172 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.172 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.172 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.172 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.429 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:31.429 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.429 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.430 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.687 00:19:31.687 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.687 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.687 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.943 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.943 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
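A note on the repeating pattern above: after every attach, the script queries the target for the subsystem's active qpairs and asserts on the negotiated DH-HMAC-CHAP parameters. A minimal standalone sketch of that check, assuming it is run from the SPDK tree against the target's default RPC socket (the NQN and the expected values are the ones used in this run):

    # Fetch the qpairs for the subsystem and check the first one's
    # negotiated auth parameters, as target/auth.sh@44-48 does above.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]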
00:19:31.943 01:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.943 01:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.200 01:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.200 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.200 { 00:19:32.200 "cntlid": 15, 00:19:32.201 "qid": 0, 00:19:32.201 "state": "enabled", 00:19:32.201 "listen_address": { 00:19:32.201 "trtype": "TCP", 00:19:32.201 "adrfam": "IPv4", 00:19:32.201 "traddr": "10.0.0.2", 00:19:32.201 "trsvcid": "4420" 00:19:32.201 }, 00:19:32.201 "peer_address": { 00:19:32.201 "trtype": "TCP", 00:19:32.201 "adrfam": "IPv4", 00:19:32.201 "traddr": "10.0.0.1", 00:19:32.201 "trsvcid": "58238" 00:19:32.201 }, 00:19:32.201 "auth": { 00:19:32.201 "state": "completed", 00:19:32.201 "digest": "sha256", 00:19:32.201 "dhgroup": "ffdhe2048" 00:19:32.201 } 00:19:32.201 } 00:19:32.201 ]' 00:19:32.201 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.201 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.201 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.201 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.201 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.201 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.201 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.201 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.458 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.391 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.648 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.905 00:19:34.162 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.162 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.162 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.419 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.419 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.419 01:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.419 01:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.419 01:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.419 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.419 { 00:19:34.419 "cntlid": 17, 00:19:34.419 "qid": 0, 00:19:34.419 "state": "enabled", 00:19:34.419 "listen_address": { 00:19:34.419 "trtype": "TCP", 00:19:34.419 "adrfam": "IPv4", 00:19:34.419 "traddr": "10.0.0.2", 00:19:34.419 "trsvcid": "4420" 00:19:34.419 }, 00:19:34.419 "peer_address": { 00:19:34.419 "trtype": "TCP", 00:19:34.419 "adrfam": "IPv4", 00:19:34.419 "traddr": "10.0.0.1", 00:19:34.419 "trsvcid": "58276" 00:19:34.419 }, 00:19:34.419 "auth": { 00:19:34.419 "state": "completed", 00:19:34.419 "digest": "sha256", 00:19:34.419 "dhgroup": "ffdhe3072" 00:19:34.419 } 00:19:34.419 } 00:19:34.419 ]' 00:19:34.419 01:05:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.420 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.420 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.420 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.420 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.420 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.420 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.420 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.677 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:19:35.609 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.609 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.609 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.609 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.609 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.609 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.609 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.609 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.866 
01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.866 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.867 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.430 00:19:36.430 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.430 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.430 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.688 { 00:19:36.688 "cntlid": 19, 00:19:36.688 "qid": 0, 00:19:36.688 "state": "enabled", 00:19:36.688 "listen_address": { 00:19:36.688 "trtype": "TCP", 00:19:36.688 "adrfam": "IPv4", 00:19:36.688 "traddr": "10.0.0.2", 00:19:36.688 "trsvcid": "4420" 00:19:36.688 }, 00:19:36.688 "peer_address": { 00:19:36.688 "trtype": "TCP", 00:19:36.688 "adrfam": "IPv4", 00:19:36.688 "traddr": "10.0.0.1", 00:19:36.688 "trsvcid": "58300" 00:19:36.688 }, 00:19:36.688 "auth": { 00:19:36.688 "state": "completed", 00:19:36.688 "digest": "sha256", 00:19:36.688 "dhgroup": "ffdhe3072" 00:19:36.688 } 00:19:36.688 } 00:19:36.688 ]' 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.688 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.946 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:19:37.878 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.878 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.878 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.878 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.878 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.878 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.878 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.878 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.136 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.394 00:19:38.394 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.394 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
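The kernel-initiator legs of each cycle use nvme-cli with the same key material. A sketch of that invocation with placeholder secrets (the real DHHC-1 strings in this log are throwaway test keys; the middle field of a DHHC-1:<nn>:<base64>: secret identifies the hash the key was transformed with, 00 meaning untransformed):

    # Connect the kernel NVMe/TCP initiator with DH-HMAC-CHAP.
    # --dhchap-secret is the host key; adding --dhchap-ctrl-secret makes
    # the authentication bidirectional. Secrets below are placeholders.
    nvme connect -t tcp -a 10.0.0.2 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:01:<base64-host-key>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64-ctrl-key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0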
00:19:38.394 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.651 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.651 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.651 01:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.651 01:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.908 01:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.908 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.908 { 00:19:38.908 "cntlid": 21, 00:19:38.908 "qid": 0, 00:19:38.908 "state": "enabled", 00:19:38.908 "listen_address": { 00:19:38.909 "trtype": "TCP", 00:19:38.909 "adrfam": "IPv4", 00:19:38.909 "traddr": "10.0.0.2", 00:19:38.909 "trsvcid": "4420" 00:19:38.909 }, 00:19:38.909 "peer_address": { 00:19:38.909 "trtype": "TCP", 00:19:38.909 "adrfam": "IPv4", 00:19:38.909 "traddr": "10.0.0.1", 00:19:38.909 "trsvcid": "45966" 00:19:38.909 }, 00:19:38.909 "auth": { 00:19:38.909 "state": "completed", 00:19:38.909 "digest": "sha256", 00:19:38.909 "dhgroup": "ffdhe3072" 00:19:38.909 } 00:19:38.909 } 00:19:38.909 ]' 00:19:38.909 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.909 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.909 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.909 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.909 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.909 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.909 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.909 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.166 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:19:40.097 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.097 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.097 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.097 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.097 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.097 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:40.097 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.097 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.361 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.952 00:19:40.952 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.952 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.952 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.210 { 00:19:41.210 "cntlid": 23, 00:19:41.210 "qid": 0, 00:19:41.210 "state": "enabled", 00:19:41.210 "listen_address": { 00:19:41.210 "trtype": "TCP", 00:19:41.210 "adrfam": "IPv4", 00:19:41.210 "traddr": "10.0.0.2", 00:19:41.210 "trsvcid": "4420" 00:19:41.210 }, 00:19:41.210 "peer_address": { 00:19:41.210 "trtype": "TCP", 00:19:41.210 "adrfam": "IPv4", 
00:19:41.210 "traddr": "10.0.0.1", 00:19:41.210 "trsvcid": "45986" 00:19:41.210 }, 00:19:41.210 "auth": { 00:19:41.210 "state": "completed", 00:19:41.210 "digest": "sha256", 00:19:41.210 "dhgroup": "ffdhe3072" 00:19:41.210 } 00:19:41.210 } 00:19:41.210 ]' 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.210 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.467 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.399 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.678 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.935 00:19:42.935 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.935 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.935 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.193 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.193 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.193 01:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.193 01:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.193 01:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.193 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.193 { 00:19:43.193 "cntlid": 25, 00:19:43.193 "qid": 0, 00:19:43.193 "state": "enabled", 00:19:43.193 "listen_address": { 00:19:43.193 "trtype": "TCP", 00:19:43.193 "adrfam": "IPv4", 00:19:43.193 "traddr": "10.0.0.2", 00:19:43.193 "trsvcid": "4420" 00:19:43.193 }, 00:19:43.193 "peer_address": { 00:19:43.193 "trtype": "TCP", 00:19:43.193 "adrfam": "IPv4", 00:19:43.193 "traddr": "10.0.0.1", 00:19:43.193 "trsvcid": "46022" 00:19:43.193 }, 00:19:43.193 "auth": { 00:19:43.193 "state": "completed", 00:19:43.193 "digest": "sha256", 00:19:43.193 "dhgroup": "ffdhe4096" 00:19:43.193 } 00:19:43.193 } 00:19:43.193 ]' 00:19:43.193 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.450 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.450 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.450 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.450 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.450 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.450 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.450 01:05:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.707 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:19:44.640 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.640 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.640 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.640 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.640 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.640 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.640 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.640 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.897 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.155 00:19:45.155 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.155 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.155 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.413 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.413 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.413 01:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.413 01:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.670 01:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.671 { 00:19:45.671 "cntlid": 27, 00:19:45.671 "qid": 0, 00:19:45.671 "state": "enabled", 00:19:45.671 "listen_address": { 00:19:45.671 "trtype": "TCP", 00:19:45.671 "adrfam": "IPv4", 00:19:45.671 "traddr": "10.0.0.2", 00:19:45.671 "trsvcid": "4420" 00:19:45.671 }, 00:19:45.671 "peer_address": { 00:19:45.671 "trtype": "TCP", 00:19:45.671 "adrfam": "IPv4", 00:19:45.671 "traddr": "10.0.0.1", 00:19:45.671 "trsvcid": "46040" 00:19:45.671 }, 00:19:45.671 "auth": { 00:19:45.671 "state": "completed", 00:19:45.671 "digest": "sha256", 00:19:45.671 "dhgroup": "ffdhe4096" 00:19:45.671 } 00:19:45.671 } 00:19:45.671 ]' 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.671 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.928 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:19:46.858 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.858 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
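Each combination is fully torn down before the next one is armed, which is why the disconnect/remove_host pair recurs throughout this log. In isolation:

    # Tear down one authenticated session before the next key/dhgroup pass.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # nvmf_subsystem_add_host then re-registers the host with the next
    # --dhchap-key/--dhchap-ctrlr-key pair before reconnecting.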
00:19:46.858 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.858 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.859 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.859 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.859 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.859 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.116 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.681 00:19:47.681 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.681 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.681 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.938 
01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.938 { 00:19:47.938 "cntlid": 29, 00:19:47.938 "qid": 0, 00:19:47.938 "state": "enabled", 00:19:47.938 "listen_address": { 00:19:47.938 "trtype": "TCP", 00:19:47.938 "adrfam": "IPv4", 00:19:47.938 "traddr": "10.0.0.2", 00:19:47.938 "trsvcid": "4420" 00:19:47.938 }, 00:19:47.938 "peer_address": { 00:19:47.938 "trtype": "TCP", 00:19:47.938 "adrfam": "IPv4", 00:19:47.938 "traddr": "10.0.0.1", 00:19:47.938 "trsvcid": "52054" 00:19:47.938 }, 00:19:47.938 "auth": { 00:19:47.938 "state": "completed", 00:19:47.938 "digest": "sha256", 00:19:47.938 "dhgroup": "ffdhe4096" 00:19:47.938 } 00:19:47.938 } 00:19:47.938 ]' 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.938 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.195 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:19:49.126 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.127 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.127 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.127 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.127 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.127 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.127 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.127 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.384 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.948 00:19:49.948 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.948 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.948 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.205 { 00:19:50.205 "cntlid": 31, 00:19:50.205 "qid": 0, 00:19:50.205 "state": "enabled", 00:19:50.205 "listen_address": { 00:19:50.205 "trtype": "TCP", 00:19:50.205 "adrfam": "IPv4", 00:19:50.205 "traddr": "10.0.0.2", 00:19:50.205 "trsvcid": "4420" 00:19:50.205 }, 00:19:50.205 "peer_address": { 00:19:50.205 "trtype": "TCP", 00:19:50.205 "adrfam": "IPv4", 00:19:50.205 "traddr": "10.0.0.1", 00:19:50.205 "trsvcid": "52088" 00:19:50.205 }, 00:19:50.205 "auth": { 00:19:50.205 "state": "completed", 00:19:50.205 "digest": "sha256", 00:19:50.205 "dhgroup": "ffdhe4096" 00:19:50.205 } 00:19:50.205 } 00:19:50.205 ]' 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:50.205 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:50.770 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=:
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:51.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:51.701 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.958 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:51.958 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:51.958 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:52.523
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:52.523 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:52.523 {
00:19:52.523 "cntlid": 33,
00:19:52.523 "qid": 0,
00:19:52.523 "state": "enabled",
00:19:52.523 "listen_address": {
00:19:52.523 "trtype": "TCP",
00:19:52.523 "adrfam": "IPv4",
00:19:52.523 "traddr": "10.0.0.2",
00:19:52.523 "trsvcid": "4420"
00:19:52.523 },
00:19:52.523 "peer_address": {
00:19:52.523 "trtype": "TCP",
00:19:52.523 "adrfam": "IPv4",
00:19:52.523 "traddr": "10.0.0.1",
00:19:52.523 "trsvcid": "52100"
00:19:52.523 },
00:19:52.523 "auth": {
00:19:52.523 "state": "completed",
00:19:52.523 "digest": "sha256",
00:19:52.523 "dhgroup": "ffdhe6144"
00:19:52.523 }
00:19:52.523 }
00:19:52.523 ]'
00:19:52.781 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:52.781 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:52.781 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:52.781 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:52.781 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:52.781 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:52.781 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:52.781 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:53.038 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=:
00:19:53.970 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:53.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:53.970 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:53.970 01:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:53.970 01:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.970 01:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:53.970 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:53.970 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:53.970 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:54.228 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:54.793
00:19:54.793 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:54.793 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:54.793 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:55.051 {
00:19:55.051 "cntlid": 35,
00:19:55.051 "qid": 0,
00:19:55.051 "state": "enabled",
00:19:55.051 "listen_address": {
00:19:55.051 "trtype": "TCP",
00:19:55.051 "adrfam": "IPv4",
00:19:55.051 "traddr": "10.0.0.2",
00:19:55.051 "trsvcid": "4420"
00:19:55.051 },
00:19:55.051 "peer_address": {
00:19:55.051 "trtype": "TCP",
00:19:55.051 "adrfam": "IPv4",
00:19:55.051 "traddr": "10.0.0.1",
00:19:55.051 "trsvcid": "52140"
00:19:55.051 },
00:19:55.051 "auth": {
00:19:55.051 "state": "completed",
00:19:55.051 "digest": "sha256",
00:19:55.051 "dhgroup": "ffdhe6144"
00:19:55.051 }
00:19:55.051 }
00:19:55.051 ]'
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:55.051 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:55.649 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==:
00:19:56.580 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:56.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:56.580 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:56.580 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:56.580 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.580 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:56.580 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:56.580 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:56.580 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:56.837 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:56.838 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:57.401
00:19:57.401 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:57.401 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:57.401 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:57.659 {
00:19:57.659 "cntlid": 37,
00:19:57.659 "qid": 0,
00:19:57.659 "state": "enabled",
00:19:57.659 "listen_address": {
00:19:57.659 "trtype": "TCP",
00:19:57.659 "adrfam": "IPv4",
00:19:57.659 "traddr": "10.0.0.2",
00:19:57.659 "trsvcid": "4420"
00:19:57.659 },
00:19:57.659 "peer_address": {
00:19:57.659 "trtype": "TCP",
00:19:57.659 "adrfam": "IPv4",
00:19:57.659 "traddr": "10.0.0.1",
00:19:57.659 "trsvcid": "39404"
00:19:57.659 },
00:19:57.659 "auth": {
00:19:57.659 "state": "completed",
00:19:57.659 "digest": "sha256",
00:19:57.659 "dhgroup": "ffdhe6144"
00:19:57.659 }
00:19:57.659 }
00:19:57.659 ]'
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:57.659 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:57.915 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v:
00:19:58.847 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:58.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:58.847 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:58.847 01:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:58.847 01:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:58.847 01:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:58.847 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:58.847 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:58.848 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:59.105 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:59.670
00:19:59.670 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:59.670 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:59.670 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:59.927 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:59.928 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:59.928 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.928 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.928 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.928 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:59.928 {
00:19:59.928 "cntlid": 39,
00:19:59.928 "qid": 0,
00:19:59.928 "state": "enabled",
00:19:59.928 "listen_address": {
00:19:59.928 "trtype": "TCP",
00:19:59.928 "adrfam": "IPv4",
00:19:59.928 "traddr": "10.0.0.2",
00:19:59.928 "trsvcid": "4420"
00:19:59.928 },
00:19:59.928 "peer_address": {
00:19:59.928 "trtype": "TCP",
00:19:59.928 "adrfam": "IPv4",
00:19:59.928 "traddr": "10.0.0.1",
00:19:59.928 "trsvcid": "39428"
00:19:59.928 },
00:19:59.928 "auth": {
00:19:59.928 "state": "completed",
00:19:59.928 "digest": "sha256",
00:19:59.928 "dhgroup": "ffdhe6144"
00:19:59.928 }
00:19:59.928 }
00:19:59.928 ]'
00:19:59.928 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:59.928 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:59.928 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:59.928 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:59.928 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:59.928 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:59.928 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:59.928 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:00.185 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=:
00:20:01.563 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:01.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:01.563 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:01.563 01:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.563 01:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.563 01:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.563 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:01.563 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:01.564 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:02.496
00:20:02.496 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:02.496 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:02.496 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:02.754 {
00:20:02.754 "cntlid": 41,
00:20:02.754 "qid": 0,
00:20:02.754 "state": "enabled",
00:20:02.754 "listen_address": {
00:20:02.754 "trtype": "TCP",
00:20:02.754 "adrfam": "IPv4",
00:20:02.754 "traddr": "10.0.0.2",
00:20:02.754 "trsvcid": "4420"
00:20:02.754 },
00:20:02.754 "peer_address": {
00:20:02.754 "trtype": "TCP",
00:20:02.754 "adrfam": "IPv4",
00:20:02.754 "traddr": "10.0.0.1",
00:20:02.754 "trsvcid": "39456"
00:20:02.754 },
00:20:02.754 "auth": {
00:20:02.754 "state": "completed",
00:20:02.754 "digest": "sha256",
00:20:02.754 "dhgroup": "ffdhe8192"
00:20:02.754 }
00:20:02.754 }
00:20:02.754 ]'
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:02.754 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:03.013 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=:
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:04.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.384 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:04.385 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.385 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:05.317
00:20:05.317 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:05.317 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:05.317 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:05.574 {
00:20:05.574 "cntlid": 43,
00:20:05.574 "qid": 0,
00:20:05.574 "state": "enabled",
00:20:05.574 "listen_address": {
00:20:05.574 "trtype": "TCP",
00:20:05.574 "adrfam": "IPv4",
00:20:05.574 "traddr": "10.0.0.2",
00:20:05.574 "trsvcid": "4420"
00:20:05.574 },
00:20:05.574 "peer_address": {
00:20:05.574 "trtype": "TCP",
"adrfam": "IPv4",
00:20:05.574 "traddr": "10.0.0.1",
00:20:05.574 "trsvcid": "39478"
00:20:05.574 },
00:20:05.574 "auth": {
00:20:05.574 "state": "completed",
00:20:05.574 "digest": "sha256",
00:20:05.574 "dhgroup": "ffdhe8192"
00:20:05.574 }
00:20:05.574 }
00:20:05.574 ]'
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.574 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:05.832 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==:
00:20:06.762 01:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.762 01:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:06.762 01:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:06.762 01:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.762 01:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:06.762 01:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:06.762 01:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:06.762 01:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.020 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.952
00:20:07.952 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:07.952 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:07.952 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:08.210 {
00:20:08.210 "cntlid": 45,
00:20:08.210 "qid": 0,
00:20:08.210 "state": "enabled",
00:20:08.210 "listen_address": {
00:20:08.210 "trtype": "TCP",
00:20:08.210 "adrfam": "IPv4",
00:20:08.210 "traddr": "10.0.0.2",
00:20:08.210 "trsvcid": "4420"
00:20:08.210 },
00:20:08.210 "peer_address": {
00:20:08.210 "trtype": "TCP",
00:20:08.210 "adrfam": "IPv4",
00:20:08.210 "traddr": "10.0.0.1",
00:20:08.210 "trsvcid": "44312"
00:20:08.210 },
00:20:08.210 "auth": {
00:20:08.210 "state": "completed",
00:20:08.210 "digest": "sha256",
00:20:08.210 "dhgroup": "ffdhe8192"
00:20:08.210 }
00:20:08.210 }
00:20:08.210 ]'
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:08.210 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:08.467 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:08.467 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:08.467 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:08.467 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:08.467 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:08.724 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v:
00:20:09.656 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.656 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:09.656 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:09.656 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.656 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:09.656 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:09.656 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:09.656 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:09.914 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:10.845
00:20:10.845 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:10.845 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.845 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:11.103 {
00:20:11.103 "cntlid": 47,
00:20:11.103 "qid": 0,
00:20:11.103 "state": "enabled",
00:20:11.103 "listen_address": {
00:20:11.103 "trtype": "TCP",
00:20:11.103 "adrfam": "IPv4",
00:20:11.103 "traddr": "10.0.0.2",
00:20:11.103 "trsvcid": "4420"
00:20:11.103 },
00:20:11.103 "peer_address": {
00:20:11.103 "trtype": "TCP",
00:20:11.103 "adrfam": "IPv4",
00:20:11.103 "traddr": "10.0.0.1",
00:20:11.103 "trsvcid": "44350"
00:20:11.103 },
00:20:11.103 "auth": {
00:20:11.103 "state": "completed",
00:20:11.103 "digest": "sha256",
00:20:11.103 "dhgroup": "ffdhe8192"
00:20:11.103 }
00:20:11.103 }
00:20:11.103 ]'
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:11.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:11.389 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=:
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:12.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:12.320 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.577 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.835
00:20:12.835 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:12.835 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:12.835 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:13.093 {
00:20:13.093 "cntlid": 49,
00:20:13.093 "qid": 0,
00:20:13.093 "state": "enabled",
00:20:13.093 "listen_address": {
00:20:13.093 "trtype": "TCP",
00:20:13.093 "adrfam": "IPv4",
00:20:13.093 "traddr": "10.0.0.2",
00:20:13.093 "trsvcid": "4420"
00:20:13.093 },
00:20:13.093 "peer_address": {
00:20:13.093 "trtype": "TCP",
00:20:13.093 "adrfam": "IPv4",
00:20:13.093 "traddr": "10.0.0.1",
00:20:13.093 "trsvcid": "44370"
00:20:13.093 },
00:20:13.093 "auth": {
00:20:13.093 "state": "completed",
00:20:13.093 "digest": "sha384",
00:20:13.093 "dhgroup": "null"
00:20:13.093 }
00:20:13.093 }
00:20:13.093 ]'
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:13.093 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:13.350 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:13.350 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:13.350 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:13.350 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:13.350 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:13.608 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=:
00:20:14.542 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:14.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:14.542 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:14.542 01:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:14.542 01:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.542 01:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:14.542 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:14.542 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:14.542 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.800 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.058
00:20:15.058 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:15.058 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:15.058 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.315 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.315 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.315 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:15.315 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.315 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:15.315 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:15.315 {
00:20:15.315 "cntlid": 51,
00:20:15.315 "qid": 0,
00:20:15.315 "state": "enabled",
00:20:15.315 "listen_address": {
00:20:15.315 "trtype": "TCP",
00:20:15.315 "adrfam": "IPv4",
00:20:15.315 "traddr": "10.0.0.2",
00:20:15.315 "trsvcid": "4420"
00:20:15.315 },
00:20:15.315 "peer_address": {
00:20:15.316 "trtype": "TCP",
00:20:15.316 "adrfam": "IPv4",
00:20:15.316 "traddr": "10.0.0.1",
00:20:15.316 "trsvcid": "44388"
00:20:15.316 },
00:20:15.316 "auth": {
00:20:15.316 "state": "completed",
00:20:15.316 "digest": "sha384",
00:20:15.316 "dhgroup": "null"
00:20:15.316 }
00:20:15.316 }
00:20:15.316 ]'
00:20:15.316 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:15.316 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:15.316 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:15.573 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:15.573 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:15.573 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.573 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.573 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.831 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==:
00:20:16.763 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.763 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:16.763 01:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:16.763 01:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.763 01:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:16.763 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:16.763 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:16.763 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.021 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.279
00:20:17.279 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:17.279 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:17.279 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:17.537 {
00:20:17.537 "cntlid": 53,
00:20:17.537 "qid": 0,
00:20:17.537 "state": "enabled",
00:20:17.537 "listen_address": {
00:20:17.537 "trtype": "TCP",
00:20:17.537 "adrfam": "IPv4",
00:20:17.537 "traddr": "10.0.0.2",
00:20:17.537 "trsvcid": "4420"
00:20:17.537 },
00:20:17.537 "peer_address": {
00:20:17.537 "trtype": "TCP",
00:20:17.537 "adrfam": "IPv4",
00:20:17.537 "traddr": "10.0.0.1",
00:20:17.537 "trsvcid": "56062"
00:20:17.537 },
00:20:17.537 "auth": {
00:20:17.537 "state": "completed",
00:20:17.537 "digest": "sha384",
00:20:17.537 "dhgroup": "null"
00:20:17.537 }
00:20:17.537 }
00:20:17.537 ]'
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.537 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:17.794 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v:
00:20:18.729 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.729 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:18.729 01:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:18.729 01:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.729 01:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:18.729 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:18.729 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:18.729 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:18.986 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3
00:20:18.986 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:18.986 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:18.986 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:18.987 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:18.987 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:18.987 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:18.987 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:18.987 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.987 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:18.987 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:18.987 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:19.550
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:19.550 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:19.550 {
00:20:19.550 "cntlid": 55,
00:20:19.550 "qid": 0,
00:20:19.550 "state": "enabled",
00:20:19.550 "listen_address": {
00:20:19.550 "trtype": "TCP",
00:20:19.550 "adrfam": "IPv4",
00:20:19.550 "traddr": "10.0.0.2",
00:20:19.551 "trsvcid": "4420"
00:20:19.551 },
00:20:19.551 "peer_address": {
00:20:19.551 "trtype": "TCP",
00:20:19.551 "adrfam": "IPv4",
00:20:19.551 "traddr": "10.0.0.1",
00:20:19.551 "trsvcid": "56082"
00:20:19.551 },
00:20:19.551 "auth": {
00:20:19.551 "state": "completed",
00:20:19.551 "digest": "sha384",
00:20:19.551 "dhgroup": "null"
00:20:19.551 }
00:20:19.551 }
00:20:19.551 ]'
00:20:19.807 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:19.807 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:19.807 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:19.807 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:19.807 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:19.807 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:19.807 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:19.807 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:20.065 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=:
00:20:21.032 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:21.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:21.032 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:21.033 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:21.033 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.033 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:21.033 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:21.033 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:21.033 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:21.033 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0
00:20:21.290 01:06:14
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.290 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.548 00:20:21.548 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.548 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.548 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.112 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.112 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.112 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.112 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.112 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.112 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.112 { 00:20:22.112 "cntlid": 57, 00:20:22.112 "qid": 0, 00:20:22.112 "state": "enabled", 00:20:22.112 "listen_address": { 00:20:22.112 "trtype": "TCP", 00:20:22.112 "adrfam": "IPv4", 00:20:22.112 "traddr": "10.0.0.2", 00:20:22.112 "trsvcid": "4420" 00:20:22.112 }, 00:20:22.112 "peer_address": { 00:20:22.112 "trtype": "TCP", 00:20:22.112 "adrfam": "IPv4", 00:20:22.112 "traddr": "10.0.0.1", 00:20:22.112 "trsvcid": "56104" 00:20:22.112 }, 00:20:22.112 "auth": { 00:20:22.112 "state": "completed", 00:20:22.112 "digest": "sha384", 00:20:22.112 "dhgroup": "ffdhe2048" 00:20:22.112 } 00:20:22.112 } 00:20:22.112 ]' 00:20:22.112 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.112 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.112 01:06:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.112 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.112 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.112 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.112 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.112 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.370 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:20:23.302 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.302 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.302 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.302 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.302 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.302 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.302 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.302 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.560 01:06:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.560 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.818 00:20:23.818 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.818 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.818 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.382 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.382 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.382 01:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.382 01:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.382 01:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.382 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.382 { 00:20:24.382 "cntlid": 59, 00:20:24.382 "qid": 0, 00:20:24.382 "state": "enabled", 00:20:24.382 "listen_address": { 00:20:24.382 "trtype": "TCP", 00:20:24.382 "adrfam": "IPv4", 00:20:24.382 "traddr": "10.0.0.2", 00:20:24.383 "trsvcid": "4420" 00:20:24.383 }, 00:20:24.383 "peer_address": { 00:20:24.383 "trtype": "TCP", 00:20:24.383 "adrfam": "IPv4", 00:20:24.383 "traddr": "10.0.0.1", 00:20:24.383 "trsvcid": "56138" 00:20:24.383 }, 00:20:24.383 "auth": { 00:20:24.383 "state": "completed", 00:20:24.383 "digest": "sha384", 00:20:24.383 "dhgroup": "ffdhe2048" 00:20:24.383 } 00:20:24.383 } 00:20:24.383 ]' 00:20:24.383 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.383 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.383 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.383 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.383 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.383 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.383 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.383 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.640 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:20:25.571 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.571 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.571 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.571 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.571 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.571 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.571 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.571 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.829 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.086 00:20:26.086 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.086 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.087 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:26.344 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.344 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.344 01:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.344 01:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.344 01:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.344 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.344 { 00:20:26.344 "cntlid": 61, 00:20:26.344 "qid": 0, 00:20:26.344 "state": "enabled", 00:20:26.344 "listen_address": { 00:20:26.344 "trtype": "TCP", 00:20:26.344 "adrfam": "IPv4", 00:20:26.344 "traddr": "10.0.0.2", 00:20:26.344 "trsvcid": "4420" 00:20:26.344 }, 00:20:26.344 "peer_address": { 00:20:26.344 "trtype": "TCP", 00:20:26.344 "adrfam": "IPv4", 00:20:26.344 "traddr": "10.0.0.1", 00:20:26.344 "trsvcid": "56168" 00:20:26.344 }, 00:20:26.344 "auth": { 00:20:26.344 "state": "completed", 00:20:26.344 "digest": "sha384", 00:20:26.344 "dhgroup": "ffdhe2048" 00:20:26.344 } 00:20:26.344 } 00:20:26.344 ]' 00:20:26.344 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.642 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.642 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.642 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.642 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.642 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.642 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.642 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.900 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:20:27.830 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.830 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.830 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.830 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.830 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.830 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.830 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:27.830 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.088 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.345 00:20:28.345 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.345 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.345 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.602 { 00:20:28.602 "cntlid": 63, 00:20:28.602 "qid": 0, 00:20:28.602 "state": "enabled", 00:20:28.602 "listen_address": { 00:20:28.602 "trtype": "TCP", 00:20:28.602 "adrfam": "IPv4", 00:20:28.602 "traddr": "10.0.0.2", 00:20:28.602 "trsvcid": "4420" 00:20:28.602 }, 00:20:28.602 "peer_address": { 00:20:28.602 "trtype": "TCP", 00:20:28.602 "adrfam": "IPv4", 00:20:28.602 "traddr": "10.0.0.1", 00:20:28.602 "trsvcid": "56972" 00:20:28.602 }, 00:20:28.602 "auth": { 00:20:28.602 "state": "completed", 00:20:28.602 "digest": 
"sha384", 00:20:28.602 "dhgroup": "ffdhe2048" 00:20:28.602 } 00:20:28.602 } 00:20:28.602 ]' 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.602 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.859 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.859 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.859 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.859 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.859 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.145 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.077 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.335 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.592 00:20:30.592 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.592 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.592 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.850 { 00:20:30.850 "cntlid": 65, 00:20:30.850 "qid": 0, 00:20:30.850 "state": "enabled", 00:20:30.850 "listen_address": { 00:20:30.850 "trtype": "TCP", 00:20:30.850 "adrfam": "IPv4", 00:20:30.850 "traddr": "10.0.0.2", 00:20:30.850 "trsvcid": "4420" 00:20:30.850 }, 00:20:30.850 "peer_address": { 00:20:30.850 "trtype": "TCP", 00:20:30.850 "adrfam": "IPv4", 00:20:30.850 "traddr": "10.0.0.1", 00:20:30.850 "trsvcid": "57002" 00:20:30.850 }, 00:20:30.850 "auth": { 00:20:30.850 "state": "completed", 00:20:30.850 "digest": "sha384", 00:20:30.850 "dhgroup": "ffdhe3072" 00:20:30.850 } 00:20:30.850 } 00:20:30.850 ]' 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.850 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.107 
01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.479 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.737 00:20:32.737 01:06:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.737 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.737 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.994 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.994 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.994 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.994 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.994 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.994 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.994 { 00:20:32.994 "cntlid": 67, 00:20:32.994 "qid": 0, 00:20:32.994 "state": "enabled", 00:20:32.994 "listen_address": { 00:20:32.994 "trtype": "TCP", 00:20:32.994 "adrfam": "IPv4", 00:20:32.994 "traddr": "10.0.0.2", 00:20:32.994 "trsvcid": "4420" 00:20:32.994 }, 00:20:32.994 "peer_address": { 00:20:32.994 "trtype": "TCP", 00:20:32.994 "adrfam": "IPv4", 00:20:32.994 "traddr": "10.0.0.1", 00:20:32.994 "trsvcid": "57020" 00:20:32.994 }, 00:20:32.994 "auth": { 00:20:32.994 "state": "completed", 00:20:32.994 "digest": "sha384", 00:20:32.994 "dhgroup": "ffdhe3072" 00:20:32.994 } 00:20:32.994 } 00:20:32.994 ]' 00:20:32.994 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.251 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.251 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.251 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.251 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.251 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.251 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.251 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.508 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:20:34.440 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.440 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.440 01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.440 01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.440 
01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.440 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.440 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.440 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.698 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.955 00:20:34.955 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.955 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.955 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.213 { 00:20:35.213 "cntlid": 69, 00:20:35.213 "qid": 0, 00:20:35.213 "state": "enabled", 00:20:35.213 "listen_address": { 
00:20:35.213 "trtype": "TCP", 00:20:35.213 "adrfam": "IPv4", 00:20:35.213 "traddr": "10.0.0.2", 00:20:35.213 "trsvcid": "4420" 00:20:35.213 }, 00:20:35.213 "peer_address": { 00:20:35.213 "trtype": "TCP", 00:20:35.213 "adrfam": "IPv4", 00:20:35.213 "traddr": "10.0.0.1", 00:20:35.213 "trsvcid": "57048" 00:20:35.213 }, 00:20:35.213 "auth": { 00:20:35.213 "state": "completed", 00:20:35.213 "digest": "sha384", 00:20:35.213 "dhgroup": "ffdhe3072" 00:20:35.213 } 00:20:35.213 } 00:20:35.213 ]' 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.213 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.470 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.470 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.470 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.471 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.471 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.728 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:20:36.660 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.660 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.660 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.660 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.660 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.660 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.660 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.660 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.918 
01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.918 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.175 00:20:37.175 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.175 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.175 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.432 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.432 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.432 01:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.432 01:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.433 01:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.433 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.433 { 00:20:37.433 "cntlid": 71, 00:20:37.433 "qid": 0, 00:20:37.433 "state": "enabled", 00:20:37.433 "listen_address": { 00:20:37.433 "trtype": "TCP", 00:20:37.433 "adrfam": "IPv4", 00:20:37.433 "traddr": "10.0.0.2", 00:20:37.433 "trsvcid": "4420" 00:20:37.433 }, 00:20:37.433 "peer_address": { 00:20:37.433 "trtype": "TCP", 00:20:37.433 "adrfam": "IPv4", 00:20:37.433 "traddr": "10.0.0.1", 00:20:37.433 "trsvcid": "34332" 00:20:37.433 }, 00:20:37.433 "auth": { 00:20:37.433 "state": "completed", 00:20:37.433 "digest": "sha384", 00:20:37.433 "dhgroup": "ffdhe3072" 00:20:37.433 } 00:20:37.433 } 00:20:37.433 ]' 00:20:37.433 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.433 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.433 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.690 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.690 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.690 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.690 01:06:30 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.690 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.947 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.880 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.137 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.702 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.702 { 00:20:39.702 "cntlid": 73, 00:20:39.702 "qid": 0, 00:20:39.702 "state": "enabled", 00:20:39.702 "listen_address": { 00:20:39.702 "trtype": "TCP", 00:20:39.702 "adrfam": "IPv4", 00:20:39.702 "traddr": "10.0.0.2", 00:20:39.702 "trsvcid": "4420" 00:20:39.702 }, 00:20:39.702 "peer_address": { 00:20:39.702 "trtype": "TCP", 00:20:39.702 "adrfam": "IPv4", 00:20:39.702 "traddr": "10.0.0.1", 00:20:39.702 "trsvcid": "34366" 00:20:39.702 }, 00:20:39.702 "auth": { 00:20:39.702 "state": "completed", 00:20:39.702 "digest": "sha384", 00:20:39.702 "dhgroup": "ffdhe4096" 00:20:39.702 } 00:20:39.702 } 00:20:39.702 ]' 00:20:39.702 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.959 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.959 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.959 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.959 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.959 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.959 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.959 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.216 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:20:41.147 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.147 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.147 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.147 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.147 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.147 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.147 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.147 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.404 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:41.404 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.404 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.404 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.405 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:41.405 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.405 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.405 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.405 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.405 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.405 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.405 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.696 00:20:41.975 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.975 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.975 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.976 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.976 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.976 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.976 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:41.976 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.976 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.976 { 00:20:41.976 "cntlid": 75, 00:20:41.976 "qid": 0, 00:20:41.976 "state": "enabled", 00:20:41.976 "listen_address": { 00:20:41.976 "trtype": "TCP", 00:20:41.976 "adrfam": "IPv4", 00:20:41.976 "traddr": "10.0.0.2", 00:20:41.976 "trsvcid": "4420" 00:20:41.976 }, 00:20:41.976 "peer_address": { 00:20:41.976 "trtype": "TCP", 00:20:41.976 "adrfam": "IPv4", 00:20:41.976 "traddr": "10.0.0.1", 00:20:41.976 "trsvcid": "34378" 00:20:41.976 }, 00:20:41.976 "auth": { 00:20:41.976 "state": "completed", 00:20:41.976 "digest": "sha384", 00:20:41.976 "dhgroup": "ffdhe4096" 00:20:41.976 } 00:20:41.976 } 00:20:41.976 ]' 00:20:41.976 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.232 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.232 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.232 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.232 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.232 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.232 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.232 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.489 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:20:43.421 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.421 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.421 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.421 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.421 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.421 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.421 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.421 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.678 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.243 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.243 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.243 { 00:20:44.243 "cntlid": 77, 00:20:44.243 "qid": 0, 00:20:44.243 "state": "enabled", 00:20:44.243 "listen_address": { 00:20:44.243 "trtype": "TCP", 00:20:44.243 "adrfam": "IPv4", 00:20:44.243 "traddr": "10.0.0.2", 00:20:44.243 "trsvcid": "4420" 00:20:44.244 }, 00:20:44.244 "peer_address": { 00:20:44.244 "trtype": "TCP", 00:20:44.244 "adrfam": "IPv4", 00:20:44.244 "traddr": "10.0.0.1", 00:20:44.244 "trsvcid": "34398" 00:20:44.244 }, 00:20:44.244 "auth": { 00:20:44.244 "state": "completed", 00:20:44.244 "digest": "sha384", 00:20:44.244 "dhgroup": "ffdhe4096" 00:20:44.244 } 00:20:44.244 } 00:20:44.244 ]' 00:20:44.244 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.501 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.501 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:44.501 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.501 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.501 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.502 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.502 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.759 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:20:45.691 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.692 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.692 01:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.692 01:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.692 01:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.692 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.692 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.692 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:45.949 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.514 00:20:46.514 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.514 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.514 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.772 { 00:20:46.772 "cntlid": 79, 00:20:46.772 "qid": 0, 00:20:46.772 "state": "enabled", 00:20:46.772 "listen_address": { 00:20:46.772 "trtype": "TCP", 00:20:46.772 "adrfam": "IPv4", 00:20:46.772 "traddr": "10.0.0.2", 00:20:46.772 "trsvcid": "4420" 00:20:46.772 }, 00:20:46.772 "peer_address": { 00:20:46.772 "trtype": "TCP", 00:20:46.772 "adrfam": "IPv4", 00:20:46.772 "traddr": "10.0.0.1", 00:20:46.772 "trsvcid": "34420" 00:20:46.772 }, 00:20:46.772 "auth": { 00:20:46.772 "state": "completed", 00:20:46.772 "digest": "sha384", 00:20:46.772 "dhgroup": "ffdhe4096" 00:20:46.772 } 00:20:46.772 } 00:20:46.772 ]' 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.772 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.030 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.962 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.962 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.220 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.785 00:20:49.042 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.042 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.042 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.042 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.042 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.042 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.042 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.042 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.042 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.042 { 00:20:49.042 "cntlid": 81, 00:20:49.042 "qid": 0, 00:20:49.042 "state": "enabled", 00:20:49.042 "listen_address": { 00:20:49.042 "trtype": "TCP", 00:20:49.042 "adrfam": "IPv4", 00:20:49.042 "traddr": "10.0.0.2", 00:20:49.042 "trsvcid": "4420" 00:20:49.042 }, 00:20:49.042 "peer_address": { 00:20:49.042 "trtype": "TCP", 00:20:49.042 "adrfam": "IPv4", 00:20:49.042 "traddr": "10.0.0.1", 00:20:49.042 "trsvcid": "46230" 00:20:49.042 }, 00:20:49.042 "auth": { 00:20:49.042 "state": "completed", 00:20:49.042 "digest": "sha384", 00:20:49.042 "dhgroup": "ffdhe6144" 00:20:49.042 } 00:20:49.042 } 00:20:49.042 ]' 00:20:49.300 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.300 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.300 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.300 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.300 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.300 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.300 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.300 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.557 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:20:50.489 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.489 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.489 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.489 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.489 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.489 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.489 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.489 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.747 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.312 00:20:51.312 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.312 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.312 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.570 { 00:20:51.570 "cntlid": 83, 00:20:51.570 "qid": 0, 00:20:51.570 "state": "enabled", 00:20:51.570 "listen_address": { 00:20:51.570 "trtype": "TCP", 00:20:51.570 "adrfam": "IPv4", 00:20:51.570 "traddr": "10.0.0.2", 00:20:51.570 "trsvcid": "4420" 00:20:51.570 }, 00:20:51.570 "peer_address": { 00:20:51.570 "trtype": "TCP", 00:20:51.570 "adrfam": "IPv4", 00:20:51.570 "traddr": "10.0.0.1", 00:20:51.570 "trsvcid": "46260" 00:20:51.570 }, 00:20:51.570 "auth": { 00:20:51.570 "state": "completed", 00:20:51.570 "digest": "sha384", 00:20:51.570 
"dhgroup": "ffdhe6144" 00:20:51.570 } 00:20:51.570 } 00:20:51.570 ]' 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.570 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.828 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:20:53.198 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.198 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.198 01:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.198 01:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.198 01:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.198 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.198 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.198 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.198 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.763 00:20:53.763 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.763 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.763 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.020 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.020 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.020 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.020 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.021 { 00:20:54.021 "cntlid": 85, 00:20:54.021 "qid": 0, 00:20:54.021 "state": "enabled", 00:20:54.021 "listen_address": { 00:20:54.021 "trtype": "TCP", 00:20:54.021 "adrfam": "IPv4", 00:20:54.021 "traddr": "10.0.0.2", 00:20:54.021 "trsvcid": "4420" 00:20:54.021 }, 00:20:54.021 "peer_address": { 00:20:54.021 "trtype": "TCP", 00:20:54.021 "adrfam": "IPv4", 00:20:54.021 "traddr": "10.0.0.1", 00:20:54.021 "trsvcid": "46276" 00:20:54.021 }, 00:20:54.021 "auth": { 00:20:54.021 "state": "completed", 00:20:54.021 "digest": "sha384", 00:20:54.021 "dhgroup": "ffdhe6144" 00:20:54.021 } 00:20:54.021 } 00:20:54.021 ]' 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.021 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.278 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:20:55.211 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.211 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.211 01:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.211 01:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.211 01:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.211 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.211 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.211 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.468 01:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.725 01:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.725 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.725 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.294 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.294 01:06:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.294 { 00:20:56.294 "cntlid": 87, 00:20:56.294 "qid": 0, 00:20:56.294 "state": "enabled", 00:20:56.294 "listen_address": { 00:20:56.294 "trtype": "TCP", 00:20:56.294 "adrfam": "IPv4", 00:20:56.294 "traddr": "10.0.0.2", 00:20:56.294 "trsvcid": "4420" 00:20:56.294 }, 00:20:56.294 "peer_address": { 00:20:56.294 "trtype": "TCP", 00:20:56.294 "adrfam": "IPv4", 00:20:56.294 "traddr": "10.0.0.1", 00:20:56.294 "trsvcid": "46314" 00:20:56.294 }, 00:20:56.294 "auth": { 00:20:56.294 "state": "completed", 00:20:56.294 "digest": "sha384", 00:20:56.294 "dhgroup": "ffdhe6144" 00:20:56.294 } 00:20:56.294 } 00:20:56.294 ]' 00:20:56.294 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.552 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.552 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.552 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:56.552 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.552 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.552 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.552 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.810 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:20:57.776 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.776 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.776 01:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.776 01:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.776 01:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.776 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.776 01:06:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.776 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.776 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.033 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:58.033 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.033 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.033 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.033 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:58.034 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.034 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.034 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.034 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.034 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.034 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.034 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.966 00:20:58.966 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.966 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.966 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.224 { 00:20:59.224 "cntlid": 89, 00:20:59.224 "qid": 0, 00:20:59.224 "state": "enabled", 00:20:59.224 "listen_address": { 00:20:59.224 "trtype": "TCP", 00:20:59.224 "adrfam": "IPv4", 00:20:59.224 "traddr": "10.0.0.2", 00:20:59.224 
"trsvcid": "4420" 00:20:59.224 }, 00:20:59.224 "peer_address": { 00:20:59.224 "trtype": "TCP", 00:20:59.224 "adrfam": "IPv4", 00:20:59.224 "traddr": "10.0.0.1", 00:20:59.224 "trsvcid": "51660" 00:20:59.224 }, 00:20:59.224 "auth": { 00:20:59.224 "state": "completed", 00:20:59.224 "digest": "sha384", 00:20:59.224 "dhgroup": "ffdhe8192" 00:20:59.224 } 00:20:59.224 } 00:20:59.224 ]' 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.224 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.480 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.481 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.481 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.481 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.850 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.783 00:21:01.783 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.783 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.783 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.783 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.783 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.783 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.783 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.040 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.040 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.040 { 00:21:02.040 "cntlid": 91, 00:21:02.040 "qid": 0, 00:21:02.040 "state": "enabled", 00:21:02.040 "listen_address": { 00:21:02.040 "trtype": "TCP", 00:21:02.040 "adrfam": "IPv4", 00:21:02.040 "traddr": "10.0.0.2", 00:21:02.040 "trsvcid": "4420" 00:21:02.040 }, 00:21:02.040 "peer_address": { 00:21:02.040 "trtype": "TCP", 00:21:02.040 "adrfam": "IPv4", 00:21:02.040 "traddr": "10.0.0.1", 00:21:02.040 "trsvcid": "51704" 00:21:02.040 }, 00:21:02.040 "auth": { 00:21:02.040 "state": "completed", 00:21:02.040 "digest": "sha384", 00:21:02.040 "dhgroup": "ffdhe8192" 00:21:02.040 } 00:21:02.040 } 00:21:02.040 ]' 00:21:02.040 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.040 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.040 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.040 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.040 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.040 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.040 01:06:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.040 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.298 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:21:03.230 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.230 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.230 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.230 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.230 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.230 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.230 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.230 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.488 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.420 00:21:04.420 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.420 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.420 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.678 { 00:21:04.678 "cntlid": 93, 00:21:04.678 "qid": 0, 00:21:04.678 "state": "enabled", 00:21:04.678 "listen_address": { 00:21:04.678 "trtype": "TCP", 00:21:04.678 "adrfam": "IPv4", 00:21:04.678 "traddr": "10.0.0.2", 00:21:04.678 "trsvcid": "4420" 00:21:04.678 }, 00:21:04.678 "peer_address": { 00:21:04.678 "trtype": "TCP", 00:21:04.678 "adrfam": "IPv4", 00:21:04.678 "traddr": "10.0.0.1", 00:21:04.678 "trsvcid": "51742" 00:21:04.678 }, 00:21:04.678 "auth": { 00:21:04.678 "state": "completed", 00:21:04.678 "digest": "sha384", 00:21:04.678 "dhgroup": "ffdhe8192" 00:21:04.678 } 00:21:04.678 } 00:21:04.678 ]' 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.678 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.936 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.936 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.936 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.936 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.936 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.194 01:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:21:06.126 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.126 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.126 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.126 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.126 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.126 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.126 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.126 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.384 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.317 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.317 01:07:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.317 { 00:21:07.317 "cntlid": 95, 00:21:07.317 "qid": 0, 00:21:07.317 "state": "enabled", 00:21:07.317 "listen_address": { 00:21:07.317 "trtype": "TCP", 00:21:07.317 "adrfam": "IPv4", 00:21:07.317 "traddr": "10.0.0.2", 00:21:07.317 "trsvcid": "4420" 00:21:07.317 }, 00:21:07.317 "peer_address": { 00:21:07.317 "trtype": "TCP", 00:21:07.317 "adrfam": "IPv4", 00:21:07.317 "traddr": "10.0.0.1", 00:21:07.317 "trsvcid": "51310" 00:21:07.317 }, 00:21:07.317 "auth": { 00:21:07.317 "state": "completed", 00:21:07.317 "digest": "sha384", 00:21:07.317 "dhgroup": "ffdhe8192" 00:21:07.317 } 00:21:07.317 } 00:21:07.317 ]' 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.317 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.575 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.575 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.575 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.575 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.575 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.832 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.765 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.023 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.281 00:21:09.281 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.281 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.281 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.539 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.539 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.539 01:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.539 01:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.539 01:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.539 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.539 { 00:21:09.539 "cntlid": 97, 00:21:09.539 "qid": 0, 00:21:09.539 "state": "enabled", 00:21:09.539 "listen_address": { 00:21:09.539 "trtype": "TCP", 00:21:09.539 "adrfam": "IPv4", 00:21:09.539 "traddr": "10.0.0.2", 00:21:09.540 "trsvcid": "4420" 00:21:09.540 }, 00:21:09.540 "peer_address": { 00:21:09.540 "trtype": "TCP", 00:21:09.540 "adrfam": "IPv4", 00:21:09.540 "traddr": "10.0.0.1", 00:21:09.540 "trsvcid": "51342" 00:21:09.540 }, 00:21:09.540 "auth": { 00:21:09.540 "state": "completed", 00:21:09.540 "digest": "sha512", 00:21:09.540 "dhgroup": "null" 00:21:09.540 } 00:21:09.540 } 00:21:09.540 ]' 00:21:09.540 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.540 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.540 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:09.797 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:09.797 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.797 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.797 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.797 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.055 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:21:10.987 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.987 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.987 01:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.987 01:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.987 01:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.987 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.987 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.987 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.246 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.504 00:21:11.504 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.504 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.504 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.761 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.761 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.761 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.761 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.761 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.761 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.761 { 00:21:11.761 "cntlid": 99, 00:21:11.761 "qid": 0, 00:21:11.761 "state": "enabled", 00:21:11.761 "listen_address": { 00:21:11.761 "trtype": "TCP", 00:21:11.761 "adrfam": "IPv4", 00:21:11.761 "traddr": "10.0.0.2", 00:21:11.761 "trsvcid": "4420" 00:21:11.761 }, 00:21:11.761 "peer_address": { 00:21:11.761 "trtype": "TCP", 00:21:11.761 "adrfam": "IPv4", 00:21:11.761 "traddr": "10.0.0.1", 00:21:11.761 "trsvcid": "51374" 00:21:11.761 }, 00:21:11.761 "auth": { 00:21:11.761 "state": "completed", 00:21:11.761 "digest": "sha512", 00:21:11.761 "dhgroup": "null" 00:21:11.761 } 00:21:11.761 } 00:21:11.761 ]' 00:21:11.761 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.019 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.019 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.019 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:12.019 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.019 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.019 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.019 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.277 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 
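Every pass in this trace drives the same RPC sequence, split across two sockets: target-side calls (rpc_cmd) go to the target's default RPC socket, while host-side calls (hostrpc) go to /var/tmp/host.sock. A minimal sketch of one pass, built only from commands visible above; key1/ckey1 are keyring key names set up earlier in the script, outside this stretch of the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # host side: restrict the initiator to one digest/dhgroup combination
    "$rpc" -s "$sock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # target side: allow the host on the subsystem with the key pair under test
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach, forcing the DH-HMAC-CHAP handshake
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # target side: confirm the qpair authenticated, then tear down
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect: completed
    "$rpc" -s "$sock" bdev_nvme_detach_controller nvme0

The kernel-initiator leg then repeats the handshake with nvme connect --dhchap-secret/--dhchap-ctrl-secret and tears it down with nvme disconnect and nvmf_subsystem_remove_host, exactly as logged.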
00:21:13.250 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.250 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.250 01:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.250 01:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.250 01:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.250 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.250 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.250 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.507 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:13.507 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.507 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.507 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:13.507 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.507 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.508 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.508 01:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.508 01:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.508 01:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.508 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.508 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.765 00:21:13.765 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.765 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.765 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.022 { 00:21:14.022 "cntlid": 101, 00:21:14.022 "qid": 0, 00:21:14.022 "state": "enabled", 00:21:14.022 "listen_address": { 00:21:14.022 "trtype": "TCP", 00:21:14.022 "adrfam": "IPv4", 00:21:14.022 "traddr": "10.0.0.2", 00:21:14.022 "trsvcid": "4420" 00:21:14.022 }, 00:21:14.022 "peer_address": { 00:21:14.022 "trtype": "TCP", 00:21:14.022 "adrfam": "IPv4", 00:21:14.022 "traddr": "10.0.0.1", 00:21:14.022 "trsvcid": "51404" 00:21:14.022 }, 00:21:14.022 "auth": { 00:21:14.022 "state": "completed", 00:21:14.022 "digest": "sha512", 00:21:14.022 "dhgroup": "null" 00:21:14.022 } 00:21:14.022 } 00:21:14.022 ]' 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.022 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:14.280 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.280 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.280 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.280 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.538 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:21:15.472 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.472 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.472 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.472 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.472 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.472 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.472 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:15.472 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.730 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.988 00:21:15.988 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.988 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.988 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.245 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.245 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.245 01:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.245 01:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.245 01:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.245 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.245 { 00:21:16.245 "cntlid": 103, 00:21:16.245 "qid": 0, 00:21:16.245 "state": "enabled", 00:21:16.245 "listen_address": { 00:21:16.245 "trtype": "TCP", 00:21:16.245 "adrfam": "IPv4", 00:21:16.245 "traddr": "10.0.0.2", 00:21:16.245 "trsvcid": "4420" 00:21:16.245 }, 00:21:16.245 "peer_address": { 00:21:16.245 "trtype": "TCP", 00:21:16.245 "adrfam": "IPv4", 00:21:16.245 "traddr": "10.0.0.1", 00:21:16.245 "trsvcid": "51428" 00:21:16.245 }, 00:21:16.245 "auth": { 00:21:16.245 "state": "completed", 00:21:16.245 "digest": "sha512", 00:21:16.245 "dhgroup": "null" 00:21:16.245 } 00:21:16.245 } 00:21:16.245 ]' 00:21:16.245 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.503 01:07:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.503 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.503 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:16.503 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.503 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.503 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.503 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.760 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.693 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.951 01:07:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.951 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.208 00:21:18.208 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.208 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.208 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.465 { 00:21:18.465 "cntlid": 105, 00:21:18.465 "qid": 0, 00:21:18.465 "state": "enabled", 00:21:18.465 "listen_address": { 00:21:18.465 "trtype": "TCP", 00:21:18.465 "adrfam": "IPv4", 00:21:18.465 "traddr": "10.0.0.2", 00:21:18.465 "trsvcid": "4420" 00:21:18.465 }, 00:21:18.465 "peer_address": { 00:21:18.465 "trtype": "TCP", 00:21:18.465 "adrfam": "IPv4", 00:21:18.465 "traddr": "10.0.0.1", 00:21:18.465 "trsvcid": "38848" 00:21:18.465 }, 00:21:18.465 "auth": { 00:21:18.465 "state": "completed", 00:21:18.465 "digest": "sha512", 00:21:18.465 "dhgroup": "ffdhe2048" 00:21:18.465 } 00:21:18.465 } 00:21:18.465 ]' 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.465 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.029 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:21:19.962 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.962 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.962 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.962 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.962 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.962 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.962 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.962 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.220 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.476 00:21:20.476 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.476 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:20.476 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.733 { 00:21:20.733 "cntlid": 107, 00:21:20.733 "qid": 0, 00:21:20.733 "state": "enabled", 00:21:20.733 "listen_address": { 00:21:20.733 "trtype": "TCP", 00:21:20.733 "adrfam": "IPv4", 00:21:20.733 "traddr": "10.0.0.2", 00:21:20.733 "trsvcid": "4420" 00:21:20.733 }, 00:21:20.733 "peer_address": { 00:21:20.733 "trtype": "TCP", 00:21:20.733 "adrfam": "IPv4", 00:21:20.733 "traddr": "10.0.0.1", 00:21:20.733 "trsvcid": "38874" 00:21:20.733 }, 00:21:20.733 "auth": { 00:21:20.733 "state": "completed", 00:21:20.733 "digest": "sha512", 00:21:20.733 "dhgroup": "ffdhe2048" 00:21:20.733 } 00:21:20.733 } 00:21:20.733 ]' 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.733 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.991 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:21:21.922 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.922 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.923 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.923 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.923 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.923 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.180 01:07:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:22.180 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.439 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.696 00:21:22.696 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.696 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.696 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.953 { 00:21:22.953 "cntlid": 109, 00:21:22.953 "qid": 0, 00:21:22.953 "state": "enabled", 00:21:22.953 "listen_address": { 00:21:22.953 "trtype": "TCP", 00:21:22.953 "adrfam": "IPv4", 00:21:22.953 "traddr": "10.0.0.2", 00:21:22.953 "trsvcid": "4420" 00:21:22.953 }, 00:21:22.953 "peer_address": { 00:21:22.953 "trtype": "TCP", 00:21:22.953 
"adrfam": "IPv4", 00:21:22.953 "traddr": "10.0.0.1", 00:21:22.953 "trsvcid": "38886" 00:21:22.953 }, 00:21:22.953 "auth": { 00:21:22.953 "state": "completed", 00:21:22.953 "digest": "sha512", 00:21:22.953 "dhgroup": "ffdhe2048" 00:21:22.953 } 00:21:22.953 } 00:21:22.953 ]' 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.953 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.953 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.953 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.953 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.953 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.953 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.211 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:21:24.144 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.144 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.144 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.144 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.144 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.144 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.144 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:24.144 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.402 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.659 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.659 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.659 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.917 00:21:24.917 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.917 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.917 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.174 { 00:21:25.174 "cntlid": 111, 00:21:25.174 "qid": 0, 00:21:25.174 "state": "enabled", 00:21:25.174 "listen_address": { 00:21:25.174 "trtype": "TCP", 00:21:25.174 "adrfam": "IPv4", 00:21:25.174 "traddr": "10.0.0.2", 00:21:25.174 "trsvcid": "4420" 00:21:25.174 }, 00:21:25.174 "peer_address": { 00:21:25.174 "trtype": "TCP", 00:21:25.174 "adrfam": "IPv4", 00:21:25.174 "traddr": "10.0.0.1", 00:21:25.174 "trsvcid": "38930" 00:21:25.174 }, 00:21:25.174 "auth": { 00:21:25.174 "state": "completed", 00:21:25.174 "digest": "sha512", 00:21:25.174 "dhgroup": "ffdhe2048" 00:21:25.174 } 00:21:25.174 } 00:21:25.174 ]' 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.174 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.432 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.804 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.805 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
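After each attach, the script asserts on the negotiated session parameters (auth.sh@44-48) rather than on connect success alone. A sketch of that verification step, reusing the rpc/sock variables from the earlier sketch and the sha512/ffdhe3072 expectations of the current pass:

    # the controller must exist on the host side under the expected name
    name=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    # the target reports per-qpair auth state; check digest, dhgroup and state
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]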
00:21:27.069 00:21:27.069 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.069 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.069 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.382 { 00:21:27.382 "cntlid": 113, 00:21:27.382 "qid": 0, 00:21:27.382 "state": "enabled", 00:21:27.382 "listen_address": { 00:21:27.382 "trtype": "TCP", 00:21:27.382 "adrfam": "IPv4", 00:21:27.382 "traddr": "10.0.0.2", 00:21:27.382 "trsvcid": "4420" 00:21:27.382 }, 00:21:27.382 "peer_address": { 00:21:27.382 "trtype": "TCP", 00:21:27.382 "adrfam": "IPv4", 00:21:27.382 "traddr": "10.0.0.1", 00:21:27.382 "trsvcid": "52686" 00:21:27.382 }, 00:21:27.382 "auth": { 00:21:27.382 "state": "completed", 00:21:27.382 "digest": "sha512", 00:21:27.382 "dhgroup": "ffdhe3072" 00:21:27.382 } 00:21:27.382 } 00:21:27.382 ]' 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.382 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.639 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:21:28.572 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.572 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.572 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
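The loop headers visible in the trace (auth.sh@91-93) cross every digest with every DH group and every key id. A sketch of that driver loop; the exact array contents are an assumption, since this stretch of the log only shows sha384/sha512 and null/ffdhe2048/ffdhe3072/ffdhe8192, and connect_authenticate is the script's own function (auth.sh@96):

    digests=(sha256 sha384 sha512)                                     # assumed full set
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed full set
    keys=(key0 key1 key2 key3)                                         # placeholder key names
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

One detail recoverable from the secrets themselves: the second field of each DHHC-1 string steps with the key id (DHHC-1:00: for key0 up to DHHC-1:03: for key3). That matches the NVMe DH-HMAC-CHAP secret representation, where that field names the transformation hash applied to the base64-encoded key (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512).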
00:21:28.572 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.572 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.572 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.572 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.572 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.138 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.394 00:21:29.394 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.394 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.394 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.651 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.651 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.651 01:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.651 01:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.651 01:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.651 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.651 { 00:21:29.651 
"cntlid": 115, 00:21:29.651 "qid": 0, 00:21:29.651 "state": "enabled", 00:21:29.651 "listen_address": { 00:21:29.651 "trtype": "TCP", 00:21:29.651 "adrfam": "IPv4", 00:21:29.651 "traddr": "10.0.0.2", 00:21:29.651 "trsvcid": "4420" 00:21:29.652 }, 00:21:29.652 "peer_address": { 00:21:29.652 "trtype": "TCP", 00:21:29.652 "adrfam": "IPv4", 00:21:29.652 "traddr": "10.0.0.1", 00:21:29.652 "trsvcid": "52700" 00:21:29.652 }, 00:21:29.652 "auth": { 00:21:29.652 "state": "completed", 00:21:29.652 "digest": "sha512", 00:21:29.652 "dhgroup": "ffdhe3072" 00:21:29.652 } 00:21:29.652 } 00:21:29.652 ]' 00:21:29.652 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.652 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.652 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.652 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.652 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.652 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.652 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.652 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.909 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:21:30.840 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.840 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.840 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.840 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.840 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.840 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.840 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.840 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.097 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:31.097 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.097 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.097 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:31.097 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.098 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.098 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.098 01:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.098 01:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.098 01:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.098 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.098 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.662 00:21:31.662 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.662 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.662 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.919 { 00:21:31.919 "cntlid": 117, 00:21:31.919 "qid": 0, 00:21:31.919 "state": "enabled", 00:21:31.919 "listen_address": { 00:21:31.919 "trtype": "TCP", 00:21:31.919 "adrfam": "IPv4", 00:21:31.919 "traddr": "10.0.0.2", 00:21:31.919 "trsvcid": "4420" 00:21:31.919 }, 00:21:31.919 "peer_address": { 00:21:31.919 "trtype": "TCP", 00:21:31.919 "adrfam": "IPv4", 00:21:31.919 "traddr": "10.0.0.1", 00:21:31.919 "trsvcid": "52724" 00:21:31.919 }, 00:21:31.919 "auth": { 00:21:31.919 "state": "completed", 00:21:31.919 "digest": "sha512", 00:21:31.919 "dhgroup": "ffdhe3072" 00:21:31.919 } 00:21:31.919 } 00:21:31.919 ]' 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.919 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.175 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:21:33.106 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.106 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.106 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.106 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.106 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.106 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.106 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.106 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.363 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.620 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.620 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.620 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.877 00:21:33.877 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.877 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.877 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.134 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.134 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.135 { 00:21:34.135 "cntlid": 119, 00:21:34.135 "qid": 0, 00:21:34.135 "state": "enabled", 00:21:34.135 "listen_address": { 00:21:34.135 "trtype": "TCP", 00:21:34.135 "adrfam": "IPv4", 00:21:34.135 "traddr": "10.0.0.2", 00:21:34.135 "trsvcid": "4420" 00:21:34.135 }, 00:21:34.135 "peer_address": { 00:21:34.135 "trtype": "TCP", 00:21:34.135 "adrfam": "IPv4", 00:21:34.135 "traddr": "10.0.0.1", 00:21:34.135 "trsvcid": "52750" 00:21:34.135 }, 00:21:34.135 "auth": { 00:21:34.135 "state": "completed", 00:21:34.135 "digest": "sha512", 00:21:34.135 "dhgroup": "ffdhe3072" 00:21:34.135 } 00:21:34.135 } 00:21:34.135 ]' 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.135 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.392 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.324 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.582 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.147 00:21:36.147 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.147 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.147 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.405 01:07:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.405 { 00:21:36.405 "cntlid": 121, 00:21:36.405 "qid": 0, 00:21:36.405 "state": "enabled", 00:21:36.405 "listen_address": { 00:21:36.405 "trtype": "TCP", 00:21:36.405 "adrfam": "IPv4", 00:21:36.405 "traddr": "10.0.0.2", 00:21:36.405 "trsvcid": "4420" 00:21:36.405 }, 00:21:36.405 "peer_address": { 00:21:36.405 "trtype": "TCP", 00:21:36.405 "adrfam": "IPv4", 00:21:36.405 "traddr": "10.0.0.1", 00:21:36.405 "trsvcid": "52778" 00:21:36.405 }, 00:21:36.405 "auth": { 00:21:36.405 "state": "completed", 00:21:36.405 "digest": "sha512", 00:21:36.405 "dhgroup": "ffdhe4096" 00:21:36.405 } 00:21:36.405 } 00:21:36.405 ]' 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.405 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.662 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:21:37.596 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.596 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.596 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.597 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.597 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.597 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.597 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.597 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.854 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.419 00:21:38.419 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.419 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.419 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.420 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.420 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.420 01:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.420 01:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.420 01:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.420 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.420 { 00:21:38.420 "cntlid": 123, 00:21:38.420 "qid": 0, 00:21:38.420 "state": "enabled", 00:21:38.420 "listen_address": { 00:21:38.420 "trtype": "TCP", 00:21:38.420 "adrfam": "IPv4", 00:21:38.420 "traddr": "10.0.0.2", 00:21:38.420 "trsvcid": "4420" 00:21:38.420 }, 00:21:38.420 "peer_address": { 00:21:38.420 "trtype": "TCP", 00:21:38.420 "adrfam": "IPv4", 00:21:38.420 "traddr": "10.0.0.1", 00:21:38.420 "trsvcid": "46138" 00:21:38.420 }, 00:21:38.420 "auth": { 00:21:38.420 "state": "completed", 00:21:38.420 "digest": "sha512", 00:21:38.420 "dhgroup": "ffdhe4096" 00:21:38.420 } 00:21:38.420 } 00:21:38.420 ]' 00:21:38.420 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.678 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.678 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.678 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.678 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.678 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.678 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.678 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.936 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:21:39.868 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.868 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.868 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.868 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.868 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.869 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.869 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.869 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.127 
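The --dhchap-secret/--dhchap-ctrl-secret strings passed to nvme connect above follow the DH-HMAC-CHAP secret representation: a "DHHC-1:<t>:" prefix, a base64 blob carrying the key material plus an integrity check, and a trailing colon. As best I can tell from the secrets in this run, <t> hints at how the secret was transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); treat that mapping as an editor's reading rather than something the log states.

# Secret layout (string taken verbatim from the connect above):
#   DHHC-1 : 01 : <base64: key bytes + check> :
#            ^^-- transform hint: 00=cleartext, 01=SHA-256, 02=SHA-384, 03=SHA-512 (assumed)
secret='DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC:'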
01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.127 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.392 00:21:40.393 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.393 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.393 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.653 { 00:21:40.653 "cntlid": 125, 00:21:40.653 "qid": 0, 00:21:40.653 "state": "enabled", 00:21:40.653 "listen_address": { 00:21:40.653 "trtype": "TCP", 00:21:40.653 "adrfam": "IPv4", 00:21:40.653 "traddr": "10.0.0.2", 00:21:40.653 "trsvcid": "4420" 00:21:40.653 }, 00:21:40.653 "peer_address": { 00:21:40.653 "trtype": "TCP", 00:21:40.653 "adrfam": "IPv4", 00:21:40.653 "traddr": "10.0.0.1", 00:21:40.653 "trsvcid": "46158" 00:21:40.653 }, 00:21:40.653 "auth": { 00:21:40.653 "state": "completed", 00:21:40.653 "digest": "sha512", 00:21:40.653 "dhgroup": "ffdhe4096" 00:21:40.653 } 00:21:40.653 } 00:21:40.653 ]' 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.653 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.911 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.911 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.911 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.911 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.911 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.169 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:21:42.103 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.103 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.103 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.103 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.103 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.103 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.103 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.103 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.381 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.664 00:21:42.664 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.664 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.664 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.922 01:07:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.922 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.922 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.922 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.922 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.922 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.922 { 00:21:42.922 "cntlid": 127, 00:21:42.922 "qid": 0, 00:21:42.922 "state": "enabled", 00:21:42.922 "listen_address": { 00:21:42.922 "trtype": "TCP", 00:21:42.922 "adrfam": "IPv4", 00:21:42.922 "traddr": "10.0.0.2", 00:21:42.922 "trsvcid": "4420" 00:21:42.922 }, 00:21:42.922 "peer_address": { 00:21:42.922 "trtype": "TCP", 00:21:42.922 "adrfam": "IPv4", 00:21:42.922 "traddr": "10.0.0.1", 00:21:42.922 "trsvcid": "46176" 00:21:42.922 }, 00:21:42.922 "auth": { 00:21:42.922 "state": "completed", 00:21:42.922 "digest": "sha512", 00:21:42.922 "dhgroup": "ffdhe4096" 00:21:42.922 } 00:21:42.922 } 00:21:42.922 ]' 00:21:42.922 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.922 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.922 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.180 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:43.180 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.180 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.180 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.180 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.438 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:21:44.371 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.372 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.372 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.372 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.372 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.372 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.372 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.372 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
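The target/auth.sh@92-@94 xtrace lines above are the loop heads driving this whole section. A sketch of the control flow they imply — the dhgroups/keys arrays are defined earlier in the script, outside this excerpt, and in this portion of the log the digest is fixed at sha512:

for dhgroup in "${dhgroups[@]}"; do    # this run: ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192
    for keyid in "${!keys[@]}"; do     # key indices 0..3
        # auth.sh@94: pin the host to one digest/dhgroup pair
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # auth.sh@96: provision the host, attach, verify qpairs, detach,
        # then repeat the connect through the kernel initiator (nvme connect)
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done

Note the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at auth.sh@37: the ctrlr key is passed only when ckeys[keyid] is set, which is why the key3 passes above add the host with --dhchap-key key3 alone and run unidirectional authentication.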
00:21:44.372 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.629 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.194 00:21:45.194 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.194 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.194 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.452 { 00:21:45.452 "cntlid": 129, 00:21:45.452 "qid": 0, 00:21:45.452 "state": "enabled", 00:21:45.452 "listen_address": { 00:21:45.452 "trtype": "TCP", 00:21:45.452 "adrfam": "IPv4", 00:21:45.452 "traddr": "10.0.0.2", 00:21:45.452 "trsvcid": "4420" 00:21:45.452 }, 00:21:45.452 "peer_address": { 00:21:45.452 "trtype": "TCP", 00:21:45.452 "adrfam": "IPv4", 00:21:45.452 "traddr": "10.0.0.1", 00:21:45.452 "trsvcid": "46210" 00:21:45.452 }, 00:21:45.452 "auth": { 
00:21:45.452 "state": "completed", 00:21:45.452 "digest": "sha512", 00:21:45.452 "dhgroup": "ffdhe6144" 00:21:45.452 } 00:21:45.452 } 00:21:45.452 ]' 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.452 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.710 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:21:46.692 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.692 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.692 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.692 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.692 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.692 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.692 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.692 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.950 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.515 00:21:47.515 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.515 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.515 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.773 { 00:21:47.773 "cntlid": 131, 00:21:47.773 "qid": 0, 00:21:47.773 "state": "enabled", 00:21:47.773 "listen_address": { 00:21:47.773 "trtype": "TCP", 00:21:47.773 "adrfam": "IPv4", 00:21:47.773 "traddr": "10.0.0.2", 00:21:47.773 "trsvcid": "4420" 00:21:47.773 }, 00:21:47.773 "peer_address": { 00:21:47.773 "trtype": "TCP", 00:21:47.773 "adrfam": "IPv4", 00:21:47.773 "traddr": "10.0.0.1", 00:21:47.773 "trsvcid": "52688" 00:21:47.773 }, 00:21:47.773 "auth": { 00:21:47.773 "state": "completed", 00:21:47.773 "digest": "sha512", 00:21:47.773 "dhgroup": "ffdhe6144" 00:21:47.773 } 00:21:47.773 } 00:21:47.773 ]' 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.773 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.029 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.398 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:49.962 00:21:49.962 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.962 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.962 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.220 { 00:21:50.220 "cntlid": 133, 00:21:50.220 "qid": 0, 00:21:50.220 "state": "enabled", 00:21:50.220 "listen_address": { 00:21:50.220 "trtype": "TCP", 00:21:50.220 "adrfam": "IPv4", 00:21:50.220 "traddr": "10.0.0.2", 00:21:50.220 "trsvcid": "4420" 00:21:50.220 }, 00:21:50.220 "peer_address": { 00:21:50.220 "trtype": "TCP", 00:21:50.220 "adrfam": "IPv4", 00:21:50.220 "traddr": "10.0.0.1", 00:21:50.220 "trsvcid": "52706" 00:21:50.220 }, 00:21:50.220 "auth": { 00:21:50.220 "state": "completed", 00:21:50.220 "digest": "sha512", 00:21:50.220 "dhgroup": "ffdhe6144" 00:21:50.220 } 00:21:50.220 } 00:21:50.220 ]' 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.220 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.476 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:21:51.846 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.846 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.846 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.846 01:07:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.846 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.846 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.846 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.846 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.846 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.847 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.412 00:21:52.412 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.412 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.412 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.669 { 00:21:52.669 "cntlid": 135, 00:21:52.669 "qid": 0, 00:21:52.669 "state": "enabled", 00:21:52.669 "listen_address": { 
00:21:52.669 "trtype": "TCP", 00:21:52.669 "adrfam": "IPv4", 00:21:52.669 "traddr": "10.0.0.2", 00:21:52.669 "trsvcid": "4420" 00:21:52.669 }, 00:21:52.669 "peer_address": { 00:21:52.669 "trtype": "TCP", 00:21:52.669 "adrfam": "IPv4", 00:21:52.669 "traddr": "10.0.0.1", 00:21:52.669 "trsvcid": "52732" 00:21:52.669 }, 00:21:52.669 "auth": { 00:21:52.669 "state": "completed", 00:21:52.669 "digest": "sha512", 00:21:52.669 "dhgroup": "ffdhe6144" 00:21:52.669 } 00:21:52.669 } 00:21:52.669 ]' 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.669 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.926 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.926 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.926 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.926 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:21:53.858 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.115 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.115 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.115 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.115 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.115 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.115 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.115 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.115 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.373 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.306 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.306 { 00:21:55.306 "cntlid": 137, 00:21:55.306 "qid": 0, 00:21:55.306 "state": "enabled", 00:21:55.306 "listen_address": { 00:21:55.306 "trtype": "TCP", 00:21:55.306 "adrfam": "IPv4", 00:21:55.306 "traddr": "10.0.0.2", 00:21:55.306 "trsvcid": "4420" 00:21:55.306 }, 00:21:55.306 "peer_address": { 00:21:55.306 "trtype": "TCP", 00:21:55.306 "adrfam": "IPv4", 00:21:55.306 "traddr": "10.0.0.1", 00:21:55.306 "trsvcid": "52756" 00:21:55.306 }, 00:21:55.306 "auth": { 00:21:55.306 "state": "completed", 00:21:55.306 "digest": "sha512", 00:21:55.306 "dhgroup": "ffdhe8192" 00:21:55.306 } 00:21:55.306 } 00:21:55.306 ]' 00:21:55.306 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.564 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.564 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.564 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.564 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.564 01:07:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.564 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.564 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.821 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:21:56.754 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.754 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.754 01:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.754 01:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.754 01:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.754 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.754 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.754 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.012 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.012 01:07:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.946 00:21:57.946 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.946 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.946 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.204 { 00:21:58.204 "cntlid": 139, 00:21:58.204 "qid": 0, 00:21:58.204 "state": "enabled", 00:21:58.204 "listen_address": { 00:21:58.204 "trtype": "TCP", 00:21:58.204 "adrfam": "IPv4", 00:21:58.204 "traddr": "10.0.0.2", 00:21:58.204 "trsvcid": "4420" 00:21:58.204 }, 00:21:58.204 "peer_address": { 00:21:58.204 "trtype": "TCP", 00:21:58.204 "adrfam": "IPv4", 00:21:58.204 "traddr": "10.0.0.1", 00:21:58.204 "trsvcid": "43194" 00:21:58.204 }, 00:21:58.204 "auth": { 00:21:58.204 "state": "completed", 00:21:58.204 "digest": "sha512", 00:21:58.204 "dhgroup": "ffdhe8192" 00:21:58.204 } 00:21:58.204 } 00:21:58.204 ]' 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.204 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.462 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.462 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.462 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.719 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZGQ4Yzg0MmE1ZDkxY2RlNGNmZjQzYzAxOTIxOGZjZmTWfQyC: --dhchap-ctrl-secret DHHC-1:02:NzAxMTE1OTNiMGQzMjJiOTA3MWQxZjI0ODY5YzA4MjllMWMxYmM1ZjUxYzlmZDQ0Z0yyfA==: 00:21:59.653 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:59.653 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.653 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.653 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.653 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.653 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.653 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.653 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.911 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.844 00:22:00.844 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.844 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.844 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.102 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.102 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.102 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:01.102 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.102 { 00:22:01.102 "cntlid": 141, 00:22:01.102 "qid": 0, 00:22:01.102 "state": "enabled", 00:22:01.102 "listen_address": { 00:22:01.102 "trtype": "TCP", 00:22:01.102 "adrfam": "IPv4", 00:22:01.102 "traddr": "10.0.0.2", 00:22:01.102 "trsvcid": "4420" 00:22:01.102 }, 00:22:01.102 "peer_address": { 00:22:01.102 "trtype": "TCP", 00:22:01.102 "adrfam": "IPv4", 00:22:01.102 "traddr": "10.0.0.1", 00:22:01.102 "trsvcid": "43218" 00:22:01.102 }, 00:22:01.102 "auth": { 00:22:01.102 "state": "completed", 00:22:01.102 "digest": "sha512", 00:22:01.102 "dhgroup": "ffdhe8192" 00:22:01.102 } 00:22:01.102 } 00:22:01.102 ]' 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.102 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.360 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDlhNzA3ZmUzYjQ5MmJhMTk0MWEwYTUzODczYTZkZTk5ZTc2MjdjMmQ2MTJlNmI3PajNMw==: --dhchap-ctrl-secret DHHC-1:01:Yjg2OTNkZGQzMWQ1Njg3ODIwNTZhMzdkNDJiMzBjYjFsLW8v: 00:22:02.293 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.293 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.293 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.293 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.293 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.293 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.293 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.293 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.551 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:03.485 00:22:03.485 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.485 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.485 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.743 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.743 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.743 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.743 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.001 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.001 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.001 { 00:22:04.001 "cntlid": 143, 00:22:04.001 "qid": 0, 00:22:04.001 "state": "enabled", 00:22:04.001 "listen_address": { 00:22:04.001 "trtype": "TCP", 00:22:04.001 "adrfam": "IPv4", 00:22:04.001 "traddr": "10.0.0.2", 00:22:04.001 "trsvcid": "4420" 00:22:04.001 }, 00:22:04.001 "peer_address": { 00:22:04.001 "trtype": "TCP", 00:22:04.001 "adrfam": "IPv4", 00:22:04.001 "traddr": "10.0.0.1", 00:22:04.001 "trsvcid": "43254" 00:22:04.001 }, 00:22:04.001 "auth": { 00:22:04.001 "state": "completed", 00:22:04.001 "digest": "sha512", 00:22:04.001 "dhgroup": "ffdhe8192" 00:22:04.001 } 00:22:04.001 } 00:22:04.001 ]' 00:22:04.001 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.001 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.001 01:07:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.001 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.001 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.001 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.001 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.001 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.258 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.190 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
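[auth.sh flow note] Each connect_authenticate pass in the trace above is the same five-step cycle. A minimal sketch of that cycle, assuming SPDK's stock scripts/rpc.py client run from the SPDK tree, the host RPC socket /var/tmp/host.sock used in this run, and key names (key0/ckey0) registered earlier in the script; $hostnqn stands for the nqn.2014-08.org.nvmexpress:uuid:5b23e107-... host NQN from this job:

  # 1. Let the host offer exactly one digest/dhgroup combination.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # 2. Register the host NQN on the target with its DH-HMAC-CHAP key;
  #    --dhchap-ctrlr-key additionally enables bidirectional auth.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach from the host; DH-HMAC-CHAP runs during the CONNECT exchange.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4. Confirm on the target side that the qpair actually authenticated.
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'          # expect: completed

  # 5. Tear down before the next digest/dhgroup/key combination.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The nvme connect / nvme disconnect records that follow each cycle repeat the same handshake through the kernel initiator, passing the raw DHHC-1:...: secrets on the command line instead of named keys.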
00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.448 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.381 00:22:06.381 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.381 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.381 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.637 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.637 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.638 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.638 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.638 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.638 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.638 { 00:22:06.638 "cntlid": 145, 00:22:06.638 "qid": 0, 00:22:06.638 "state": "enabled", 00:22:06.638 "listen_address": { 00:22:06.638 "trtype": "TCP", 00:22:06.638 "adrfam": "IPv4", 00:22:06.638 "traddr": "10.0.0.2", 00:22:06.638 "trsvcid": "4420" 00:22:06.638 }, 00:22:06.638 "peer_address": { 00:22:06.638 "trtype": "TCP", 00:22:06.638 "adrfam": "IPv4", 00:22:06.638 "traddr": "10.0.0.1", 00:22:06.638 "trsvcid": "43280" 00:22:06.638 }, 00:22:06.638 "auth": { 00:22:06.638 "state": "completed", 00:22:06.638 "digest": "sha512", 00:22:06.638 "dhgroup": "ffdhe8192" 00:22:06.638 } 00:22:06.638 } 00:22:06.638 ]' 00:22:06.638 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.638 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.638 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.895 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.895 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.895 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.895 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.895 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.201 
01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:N2RjMjA3ZWQ2NWU0MTVmODY1OTNmNGJhMTA3NzMwMTQ4NjMyN2E5MTUwMjM3MGE52ez0nA==: --dhchap-ctrl-secret DHHC-1:03:NmYxYWM1Y2I1NzVhM2VmNmNhMzQ5NjBiNjg4M2YzMmUwYzJkZmU5ZjIwNmYzZmFjYjQ2MzQwMzFkNWU5YTE4ZdZZ/8A=: 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:08.135 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:09.067 request: 00:22:09.067 { 00:22:09.067 "name": "nvme0", 00:22:09.067 "trtype": "tcp", 00:22:09.067 "traddr": 
"10.0.0.2", 00:22:09.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.067 "adrfam": "ipv4", 00:22:09.067 "trsvcid": "4420", 00:22:09.067 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.067 "dhchap_key": "key2", 00:22:09.067 "method": "bdev_nvme_attach_controller", 00:22:09.067 "req_id": 1 00:22:09.067 } 00:22:09.067 Got JSON-RPC error response 00:22:09.067 response: 00:22:09.067 { 00:22:09.067 "code": -5, 00:22:09.067 "message": "Input/output error" 00:22:09.067 } 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:09.067 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:10.000 request: 00:22:10.000 { 00:22:10.000 "name": "nvme0", 00:22:10.000 "trtype": "tcp", 00:22:10.000 "traddr": "10.0.0.2", 00:22:10.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.000 "adrfam": "ipv4", 00:22:10.000 "trsvcid": "4420", 00:22:10.000 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.000 "dhchap_key": "key1", 00:22:10.000 "dhchap_ctrlr_key": "ckey2", 00:22:10.000 "method": "bdev_nvme_attach_controller", 00:22:10.000 "req_id": 1 00:22:10.000 } 00:22:10.000 Got JSON-RPC error response 00:22:10.000 response: 00:22:10.000 { 00:22:10.000 "code": -5, 00:22:10.000 "message": "Input/output error" 00:22:10.000 } 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.000 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.003 request: 00:22:11.003 { 00:22:11.003 "name": "nvme0", 00:22:11.003 "trtype": "tcp", 00:22:11.003 "traddr": "10.0.0.2", 00:22:11.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.003 "adrfam": "ipv4", 00:22:11.003 "trsvcid": "4420", 00:22:11.003 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.003 "dhchap_key": "key1", 00:22:11.003 "dhchap_ctrlr_key": "ckey1", 00:22:11.003 "method": "bdev_nvme_attach_controller", 00:22:11.003 "req_id": 1 00:22:11.003 } 00:22:11.003 Got JSON-RPC error response 00:22:11.003 response: 00:22:11.003 { 00:22:11.003 "code": -5, 00:22:11.003 "message": "Input/output error" 00:22:11.003 } 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3778067 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3778067 ']' 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3778067 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3778067 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:11.003 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:11.004 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3778067' 00:22:11.004 killing process with pid 3778067 00:22:11.004 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3778067 00:22:11.004 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3778067 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:11.004 01:08:04 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3800629 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3800629 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3800629 ']' 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:11.004 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.263 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3800629 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3800629 ']' 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
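[auth.sh flow note] From here the target is relaunched with auth-layer debug tracing (-L nvmf_auth) and --wait-for-rpc, and after one more positive check the script switches to negative testing: the host is deliberately mis-configured so the CONNECT-time authentication cannot succeed, and each attach is wrapped in the script's NOT helper, which inverts the exit status. A minimal sketch of that pattern, assuming the hostrpc/NOT helpers this script defines (hostrpc forwards to rpc.py -s /var/tmp/host.sock); the key3 secret used in this run appears to be sha512-transformed (its DHHC-1:03: prefix), so a host limited to sha256 cannot complete the handshake:

  # Offer only a digest the negotiated key cannot satisfy.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256

  # The attach must now fail; NOT makes the test pass only on failure.
  # The failure surfaces as a JSON-RPC error: code -5, "Input/output error".
  NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

  # Restore the full offer before the next failure scenario.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

The same check is then repeated with a restricted dhgroup list, exercising the other half of the negotiation.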
00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:11.264 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.522 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:11.522 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:11.522 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:11.522 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.522 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.779 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.713 00:22:12.713 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.713 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.713 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.713 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.713 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.713 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.713 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.971 { 00:22:12.971 
"cntlid": 1, 00:22:12.971 "qid": 0, 00:22:12.971 "state": "enabled", 00:22:12.971 "listen_address": { 00:22:12.971 "trtype": "TCP", 00:22:12.971 "adrfam": "IPv4", 00:22:12.971 "traddr": "10.0.0.2", 00:22:12.971 "trsvcid": "4420" 00:22:12.971 }, 00:22:12.971 "peer_address": { 00:22:12.971 "trtype": "TCP", 00:22:12.971 "adrfam": "IPv4", 00:22:12.971 "traddr": "10.0.0.1", 00:22:12.971 "trsvcid": "42116" 00:22:12.971 }, 00:22:12.971 "auth": { 00:22:12.971 "state": "completed", 00:22:12.971 "digest": "sha512", 00:22:12.971 "dhgroup": "ffdhe8192" 00:22:12.971 } 00:22:12.971 } 00:22:12.971 ]' 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.971 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.229 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YjA5MzliMWMzNDhkYTZlNjgzZGYwMWJiNGYwNTI0OTI0NGMwYTM0ZWIxYjY2MGI1ZGM5Yjk4OGNjNTEwZTliZkMFjv0=: 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.160 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.161 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:14.161 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.418 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.676 request: 00:22:14.676 { 00:22:14.676 "name": "nvme0", 00:22:14.676 "trtype": "tcp", 00:22:14.676 "traddr": "10.0.0.2", 00:22:14.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.676 "adrfam": "ipv4", 00:22:14.676 "trsvcid": "4420", 00:22:14.676 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.676 "dhchap_key": "key3", 00:22:14.676 "method": "bdev_nvme_attach_controller", 00:22:14.676 "req_id": 1 00:22:14.676 } 00:22:14.676 Got JSON-RPC error response 00:22:14.676 response: 00:22:14.676 { 00:22:14.676 "code": -5, 00:22:14.676 "message": "Input/output error" 00:22:14.676 } 00:22:14.676 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:14.676 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:14.676 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:14.676 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:14.676 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:14.676 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:14.676 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:14.676 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.934 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.191 request: 00:22:15.191 { 00:22:15.191 "name": "nvme0", 00:22:15.191 "trtype": "tcp", 00:22:15.191 "traddr": "10.0.0.2", 00:22:15.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.191 "adrfam": "ipv4", 00:22:15.191 "trsvcid": "4420", 00:22:15.191 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.191 "dhchap_key": "key3", 00:22:15.191 "method": "bdev_nvme_attach_controller", 00:22:15.191 "req_id": 1 00:22:15.191 } 00:22:15.191 Got JSON-RPC error response 00:22:15.191 response: 00:22:15.191 { 00:22:15.191 "code": -5, 00:22:15.191 "message": "Input/output error" 00:22:15.191 } 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.191 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.449 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.706 request: 00:22:15.706 { 00:22:15.706 "name": "nvme0", 00:22:15.706 "trtype": "tcp", 00:22:15.706 "traddr": "10.0.0.2", 00:22:15.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.706 "adrfam": "ipv4", 00:22:15.706 "trsvcid": "4420", 00:22:15.706 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.706 "dhchap_key": "key0", 00:22:15.706 "dhchap_ctrlr_key": "key1", 00:22:15.706 "method": "bdev_nvme_attach_controller", 00:22:15.706 "req_id": 1 00:22:15.706 } 00:22:15.706 Got JSON-RPC error response 00:22:15.706 response: 00:22:15.706 { 00:22:15.706 "code": -5, 00:22:15.706 "message": "Input/output error" 00:22:15.706 } 00:22:15.706 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:15.706 01:08:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.706 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.706 01:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.706 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:15.706 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:15.963 00:22:16.221 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:16.221 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:16.221 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.221 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.221 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.221 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.479 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:16.479 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:16.479 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3778207 00:22:16.479 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3778207 ']' 00:22:16.479 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3778207 00:22:16.479 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:16.479 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.479 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3778207 00:22:16.736 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:16.736 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:16.736 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3778207' 00:22:16.736 killing process with pid 3778207 00:22:16.736 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3778207 00:22:16.736 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3778207 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
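The rejected attaches above share one shape: the host-side DHCHAP options or keys are deliberately mismatched, the attach fails with JSON-RPC error -5 (Input/output error), and the full digest/dhgroup lists are restored before the next case. A minimal sketch of that pattern, assuming the same /var/tmp/host.sock RPC socket used in this run and a HOSTNQN variable (hypothetical, holding the host NQN printed above):

    # narrow the host to sha256 only; the target key was provisioned for sha512/ffdhe8192
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    # this attach is expected to fail with "Input/output error" (code -5)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    # restore the full digest/dhgroup lists before the next negative case
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192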
00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.993 rmmod nvme_tcp 00:22:16.993 rmmod nvme_fabrics 00:22:16.993 rmmod nvme_keyring 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3800629 ']' 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3800629 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3800629 ']' 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3800629 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3800629 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3800629' 00:22:16.993 killing process with pid 3800629 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3800629 00:22:16.993 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3800629 00:22:17.250 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:17.250 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:17.250 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:17.250 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:17.250 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:17.250 01:08:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.250 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.250 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.803 01:08:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:19.803 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.XdH /tmp/spdk.key-sha256.IPL /tmp/spdk.key-sha384.pwc /tmp/spdk.key-sha512.NAF /tmp/spdk.key-sha512.pWA /tmp/spdk.key-sha384.jMn /tmp/spdk.key-sha256.kLe '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:19.803 00:22:19.803 real 3m8.727s 00:22:19.803 user 7m19.149s 00:22:19.803 sys 0m24.821s 00:22:19.803 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:19.803 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 ************************************ 00:22:19.803 END TEST 
nvmf_auth_target 00:22:19.803 ************************************ 00:22:19.803 01:08:12 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:19.803 01:08:12 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:19.803 01:08:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:19.803 01:08:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:19.803 01:08:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 ************************************ 00:22:19.803 START TEST nvmf_bdevio_no_huge 00:22:19.803 ************************************ 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:19.803 * Looking for test storage... 00:22:19.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
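nvmf/common.sh, sourced at the top of this test, pins the defaults the rest of the run leans on: port 4420, serial SPDKISFASTANDAWESOME, subsystem NQN nqn.2016-06.io.spdk:testnqn, and a host NQN/ID pair generated once with nvme gen-hostnqn. Purely as an illustration of how those variables combine (this test drives the host through rpc.py and bdevio rather than the nvme CLI), a manual connect built from the same values, plus the 10.0.0.2 target address assigned later in nvmf_tcp_init, would be:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55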
00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.803 01:08:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.703 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:21.704 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:21.704 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.704 01:08:14 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:21.704 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:21.704 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.704 
01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:22:21.704 00:22:21.704 --- 10.0.0.2 ping statistics --- 00:22:21.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.704 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:21.704 00:22:21.704 --- 10.0.0.1 ping statistics --- 00:22:21.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.704 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3803356 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3803356 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3803356 ']' 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.704 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.704 [2024-07-25 01:08:14.589759] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:22:21.704 [2024-07-25 01:08:14.589847] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:21.704 [2024-07-25 01:08:14.660312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.704 [2024-07-25 01:08:14.740810] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.705 [2024-07-25 01:08:14.740863] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.705 [2024-07-25 01:08:14.740876] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.705 [2024-07-25 01:08:14.740887] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.705 [2024-07-25 01:08:14.740897] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.705 [2024-07-25 01:08:14.740985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.705 [2024-07-25 01:08:14.741048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:21.705 [2024-07-25 01:08:14.741118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:21.705 [2024-07-25 01:08:14.741120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.705 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:21.705 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:21.705 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.705 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.705 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.962 [2024-07-25 01:08:14.860532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.962 Malloc0 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.962 [2024-07-25 01:08:14.898628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.962 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.963 { 00:22:21.963 "params": { 00:22:21.963 "name": "Nvme$subsystem", 00:22:21.963 "trtype": "$TEST_TRANSPORT", 00:22:21.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.963 "adrfam": "ipv4", 00:22:21.963 "trsvcid": "$NVMF_PORT", 00:22:21.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.963 "hdgst": ${hdgst:-false}, 00:22:21.963 "ddgst": ${ddgst:-false} 00:22:21.963 }, 00:22:21.963 "method": "bdev_nvme_attach_controller" 00:22:21.963 } 00:22:21.963 EOF 00:22:21.963 )") 00:22:21.963 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:21.963 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
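The target side of this bdevio run is only five RPCs: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that bdev as a namespace, and a listener on 10.0.0.2:4420; the host-side JSON that bdevio consumes is generated next and printed below. Standing up the same target by hand, assuming rpc.py talks to the target's default /var/tmp/spdk.sock:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420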
00:22:21.963 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:21.963 01:08:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:21.963 "params": { 00:22:21.963 "name": "Nvme1", 00:22:21.963 "trtype": "tcp", 00:22:21.963 "traddr": "10.0.0.2", 00:22:21.963 "adrfam": "ipv4", 00:22:21.963 "trsvcid": "4420", 00:22:21.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.963 "hdgst": false, 00:22:21.963 "ddgst": false 00:22:21.963 }, 00:22:21.963 "method": "bdev_nvme_attach_controller" 00:22:21.963 }' 00:22:21.963 [2024-07-25 01:08:14.946462] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:22:21.963 [2024-07-25 01:08:14.946562] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3803390 ] 00:22:21.963 [2024-07-25 01:08:15.009515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:21.963 [2024-07-25 01:08:15.092363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.963 [2024-07-25 01:08:15.092413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.963 [2024-07-25 01:08:15.092416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.220 I/O targets: 00:22:22.221 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:22.221 00:22:22.221 00:22:22.221 CUnit - A unit testing framework for C - Version 2.1-3 00:22:22.221 http://cunit.sourceforge.net/ 00:22:22.221 00:22:22.221 00:22:22.221 Suite: bdevio tests on: Nvme1n1 00:22:22.221 Test: blockdev write read block ...passed 00:22:22.221 Test: blockdev write zeroes read block ...passed 00:22:22.221 Test: blockdev write zeroes read no split ...passed 00:22:22.221 Test: blockdev write zeroes read split ...passed 00:22:22.477 Test: blockdev write zeroes read split partial ...passed 00:22:22.477 Test: blockdev reset ...[2024-07-25 01:08:15.373391] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.477 [2024-07-25 01:08:15.373506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x767a00 (9): Bad file descriptor 00:22:22.477 [2024-07-25 01:08:15.442790] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
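bdevio attaches to that target once, through the bdev_nvme_attach_controller entry in the JSON printed above, and then runs its CUnit suite (the controller reset just logged, plus the read/write, comparev, and passthru cases that follow). The same invocation works from a file instead of process substitution; assuming the printed JSON were saved to /tmp/bdevio.json (hypothetical path):

    test/bdev/bdevio/bdevio --json /tmp/bdevio.json --no-huge -s 1024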
00:22:22.477 passed 00:22:22.477 Test: blockdev write read 8 blocks ...passed 00:22:22.477 Test: blockdev write read size > 128k ...passed 00:22:22.477 Test: blockdev write read invalid size ...passed 00:22:22.477 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:22.477 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:22.477 Test: blockdev write read max offset ...passed 00:22:22.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:22.477 Test: blockdev writev readv 8 blocks ...passed 00:22:22.734 Test: blockdev writev readv 30 x 1block ...passed 00:22:22.734 Test: blockdev writev readv block ...passed 00:22:22.734 Test: blockdev writev readv size > 128k ...passed 00:22:22.734 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:22.734 Test: blockdev comparev and writev ...[2024-07-25 01:08:15.700620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.734 [2024-07-25 01:08:15.700657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.700681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.734 [2024-07-25 01:08:15.700699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.701098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.734 [2024-07-25 01:08:15.701123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.701145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.734 [2024-07-25 01:08:15.701169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.701541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.734 [2024-07-25 01:08:15.701566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.701588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.734 [2024-07-25 01:08:15.701603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.701983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.734 [2024-07-25 01:08:15.702008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.702030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:22.734 [2024-07-25 01:08:15.702052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:22.734 passed 00:22:22.734 Test: blockdev nvme passthru rw ...passed 00:22:22.734 Test: blockdev nvme passthru vendor specific ...[2024-07-25 01:08:15.785613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.734 [2024-07-25 01:08:15.785641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.785839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.734 [2024-07-25 01:08:15.785864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.786051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.734 [2024-07-25 01:08:15.786075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:22.734 [2024-07-25 01:08:15.786262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.734 [2024-07-25 01:08:15.786287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:22.734 passed 00:22:22.734 Test: blockdev nvme admin passthru ...passed 00:22:22.734 Test: blockdev copy ...passed 00:22:22.734 00:22:22.734 Run Summary: Type Total Ran Passed Failed Inactive 00:22:22.734 suites 1 1 n/a 0 0 00:22:22.734 tests 23 23 23 0 0 00:22:22.734 asserts 152 152 152 0 n/a 00:22:22.734 00:22:22.734 Elapsed time = 1.205 seconds 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.299 rmmod nvme_tcp 00:22:23.299 rmmod nvme_fabrics 00:22:23.299 rmmod nvme_keyring 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3803356 ']' 00:22:23.299 01:08:16 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3803356 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3803356 ']' 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3803356 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3803356 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3803356' 00:22:23.299 killing process with pid 3803356 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3803356 00:22:23.299 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3803356 00:22:23.557 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.557 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.557 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.557 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.557 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.557 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.557 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.557 01:08:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.090 01:08:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.090 00:22:26.090 real 0m6.260s 00:22:26.090 user 0m9.837s 00:22:26.090 sys 0m2.466s 00:22:26.090 01:08:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:26.090 01:08:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:26.090 ************************************ 00:22:26.090 END TEST nvmf_bdevio_no_huge 00:22:26.090 ************************************ 00:22:26.090 01:08:18 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:26.090 01:08:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:26.090 01:08:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:26.090 01:08:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.090 ************************************ 00:22:26.090 START TEST nvmf_tls 00:22:26.090 ************************************ 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:26.090 * Looking for test storage... 
00:22:26.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.090 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.091 01:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.038 
01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:28.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:28.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:28.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:28.038 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:22:28.038 00:22:28.038 --- 10.0.0.2 ping statistics --- 00:22:28.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.038 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:22:28.038 00:22:28.038 --- 10.0.0.1 ping statistics --- 00:22:28.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.038 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.038 01:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3805462 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3805462 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3805462 ']' 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:28.039 01:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.039 [2024-07-25 01:08:20.922010] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:22:28.039 [2024-07-25 01:08:20.922099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.039 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.039 [2024-07-25 01:08:20.987276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.039 [2024-07-25 01:08:21.076765] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.039 [2024-07-25 01:08:21.076834] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
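Condensed, the nvmf_tcp_init sequence above reduces to the following ip/iptables calls (all copied from the log; the helper logic around them in nvmf/common.sh is omitted):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target (0.146 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator (0.103 ms)

Everything target-side from here on runs inside that namespace: NVMF_TARGET_NS_CMD prefixes the app, which is why the target is launched as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc (common.sh@480) and waitforlisten then blocks on /var/tmp/spdk.sock.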
00:22:28.039 [2024-07-25 01:08:21.076858] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.039 [2024-07-25 01:08:21.076884] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.039 [2024-07-25 01:08:21.076894] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.039 [2024-07-25 01:08:21.076920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.039 01:08:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:28.039 01:08:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:28.039 01:08:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.039 01:08:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.039 01:08:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.039 01:08:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.039 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:28.039 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:28.297 true 00:22:28.554 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.554 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:28.554 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:28.554 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:28.554 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:28.812 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.812 01:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:29.070 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:29.070 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:29.070 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:29.636 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.636 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:29.636 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:29.636 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:29.636 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:29.636 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:29.894 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:29.894 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:29.894 01:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:30.152 01:08:23 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.152 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:30.410 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:30.410 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:30.410 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:30.668 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:30.668 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:30.926 01:08:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.1SHZw5PyL6 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.lYKicCj708 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.1SHZw5PyL6 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.lYKicCj708 00:22:30.926 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:31.184 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:31.750 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.1SHZw5PyL6 00:22:31.750 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1SHZw5PyL6 00:22:31.750 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:31.750 [2024-07-25 01:08:24.867824] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.750 01:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:32.007 01:08:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:32.265 [2024-07-25 01:08:25.357141] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.265 [2024-07-25 01:08:25.357437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.265 01:08:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.523 malloc0 00:22:32.523 01:08:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:32.781 01:08:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1SHZw5PyL6 00:22:33.039 [2024-07-25 01:08:26.090198] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:33.039 01:08:26 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1SHZw5PyL6 00:22:33.039 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.258 Initializing NVMe Controllers 00:22:45.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:45.258 Initialization complete. Launching workers. 
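Two interchange keys were minted just before this (tls.sh@118-@128): /tmp/tmp.1SHZw5PyL6 holds the key that gets registered with the target and /tmp/tmp.lYKicCj708 a deliberately mismatched one, both chmod 0600. The format_interchange_psk step appears to wrap the raw key in the NVMeTLSkey-1 interchange form, i.e. base64 over the key bytes plus a 4-byte CRC32 trailer; a hedged sketch only, with the trailer byte order assumed rather than taken from the source:

    python3 -c 'import base64, zlib
    key = b"00112233445566778899aabbccddeeff"        # key material, used verbatim as bytes
    crc = zlib.crc32(key).to_bytes(4, "little")      # assumption: little-endian CRC32 trailer
    print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())'

With the keys on disk, setup_nvmf_tgt (tls.sh@49-@58) reduces to this RPC sequence, all copied from the log (rpc.py path shortened); -k requests the TLS-enabled listener, and --psk registers the key for host1 only:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1SHZw5PyL6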
00:22:45.258 ======================================================== 00:22:45.258 Latency(us) 00:22:45.258 Device Information : IOPS MiB/s Average min max 00:22:45.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7820.99 30.55 8185.36 1196.84 9470.52 00:22:45.258 ======================================================== 00:22:45.258 Total : 7820.99 30.55 8185.36 1196.84 9470.52 00:22:45.258 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1SHZw5PyL6 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1SHZw5PyL6' 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3807345 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3807345 /var/tmp/bdevperf.sock 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3807345 ']' 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.258 [2024-07-25 01:08:36.253711] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
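The spdk_nvme_perf smoke test above (64-deep randrw over the TLS connection, ~7.8k IOPS) confirms the data path; the remaining cases use the run_bdevperf wrapper (tls.sh@22-@45) instead. Its pattern, condensed from the log with the SPDK paths shortened: start bdevperf with no bdevs (-z) on a private RPC socket, attach the TLS controller over that socket, then kick the workload:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.1SHZw5PyL6                 # success creates bdev TLSTESTn1
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests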
00:22:45.258 [2024-07-25 01:08:36.253798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807345 ] 00:22:45.258 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.258 [2024-07-25 01:08:36.316102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.258 [2024-07-25 01:08:36.401465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:45.258 01:08:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:45.259 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1SHZw5PyL6 00:22:45.259 [2024-07-25 01:08:36.752607] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.259 [2024-07-25 01:08:36.752729] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:45.259 TLSTESTn1 00:22:45.259 01:08:36 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:45.259 Running I/O for 10 seconds... 00:22:55.220 00:22:55.220 Latency(us) 00:22:55.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.220 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:55.220 Verification LBA range: start 0x0 length 0x2000 00:22:55.220 TLSTESTn1 : 10.02 3614.65 14.12 0.00 0.00 35346.02 7815.77 54758.97 00:22:55.220 =================================================================================================================== 00:22:55.220 Total : 3614.65 14.12 0.00 0.00 35346.02 7815.77 54758.97 00:22:55.220 0 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3807345 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3807345 ']' 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3807345 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3807345 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3807345' 00:22:55.220 killing process with pid 3807345 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3807345 00:22:55.220 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.220 00:22:55.220 Latency(us) 00:22:55.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:55.220 =================================================================================================================== 00:22:55.220 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:55.220 [2024-07-25 01:08:47.042442] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3807345 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lYKicCj708 00:22:55.220 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lYKicCj708 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lYKicCj708 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lYKicCj708' 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3808544 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3808544 /var/tmp/bdevperf.sock 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3808544 ']' 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.221 [2024-07-25 01:08:47.288710] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
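From case @146 onward, run_bdevperf is wrapped in NOT, so a case passes only when the attach fails. Read back from the autotest_common.sh xtrace (@648-@675), NOT is approximately the following sketch; the es>128 signal handling and the allow-list check at @659/@670 are elided:

    NOT() {
        local es=0                 # @648
        "$@" || es=$?              # @651: run the wrapped command, capture its status
        (( !es == 0 ))             # @675: succeed only if the command failed
    }

Here the mismatched key /tmp/tmp.lYKicCj708 is offered for host1; the target cannot pair it with the registered PSK, the attach returns the Input/output error dumped below, target/tls.sh@37 converts that into return 1, and NOT inverts it back into a pass.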
00:22:55.221 [2024-07-25 01:08:47.288785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808544 ] 00:22:55.221 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.221 [2024-07-25 01:08:47.346258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.221 [2024-07-25 01:08:47.435270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lYKicCj708 00:22:55.221 [2024-07-25 01:08:47.753059] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.221 [2024-07-25 01:08:47.753171] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:55.221 [2024-07-25 01:08:47.758791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:55.221 [2024-07-25 01:08:47.758939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f00ed0 (107): Transport endpoint is not connected 00:22:55.221 [2024-07-25 01:08:47.759927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f00ed0 (9): Bad file descriptor 00:22:55.221 [2024-07-25 01:08:47.760926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:55.221 [2024-07-25 01:08:47.760946] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:55.221 [2024-07-25 01:08:47.760961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:55.221 request: 00:22:55.221 { 00:22:55.221 "name": "TLSTEST", 00:22:55.221 "trtype": "tcp", 00:22:55.221 "traddr": "10.0.0.2", 00:22:55.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.221 "adrfam": "ipv4", 00:22:55.221 "trsvcid": "4420", 00:22:55.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.221 "psk": "/tmp/tmp.lYKicCj708", 00:22:55.221 "method": "bdev_nvme_attach_controller", 00:22:55.221 "req_id": 1 00:22:55.221 } 00:22:55.221 Got JSON-RPC error response 00:22:55.221 response: 00:22:55.221 { 00:22:55.221 "code": -5, 00:22:55.221 "message": "Input/output error" 00:22:55.221 } 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3808544 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3808544 ']' 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3808544 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3808544 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3808544' 00:22:55.221 killing process with pid 3808544 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3808544 00:22:55.221 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.221 00:22:55.221 Latency(us) 00:22:55.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.221 =================================================================================================================== 00:22:55.221 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.221 [2024-07-25 01:08:47.808226] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:55.221 01:08:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3808544 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1SHZw5PyL6 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1SHZw5PyL6 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1SHZw5PyL6 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1SHZw5PyL6' 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3808669 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3808669 /var/tmp/bdevperf.sock 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3808669 ']' 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.221 [2024-07-25 01:08:48.075547] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:22:55.221 [2024-07-25 01:08:48.075626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808669 ] 00:22:55.221 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.221 [2024-07-25 01:08:48.141740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.221 [2024-07-25 01:08:48.229869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.221 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:55.222 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:55.222 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.1SHZw5PyL6 00:22:55.479 [2024-07-25 01:08:48.592103] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.479 [2024-07-25 01:08:48.592220] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:55.479 [2024-07-25 01:08:48.600869] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:55.480 [2024-07-25 01:08:48.600902] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:55.480 [2024-07-25 01:08:48.600962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:55.480 [2024-07-25 01:08:48.601119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f1ed0 (107): Transport endpoint is not connected 00:22:55.480 [2024-07-25 01:08:48.602109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f1ed0 (9): Bad file descriptor 00:22:55.480 [2024-07-25 01:08:48.603107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:55.480 [2024-07-25 01:08:48.603127] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:55.480 [2024-07-25 01:08:48.603149] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:55.480 request: 00:22:55.480 { 00:22:55.480 "name": "TLSTEST", 00:22:55.480 "trtype": "tcp", 00:22:55.480 "traddr": "10.0.0.2", 00:22:55.480 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:55.480 "adrfam": "ipv4", 00:22:55.480 "trsvcid": "4420", 00:22:55.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.480 "psk": "/tmp/tmp.1SHZw5PyL6", 00:22:55.480 "method": "bdev_nvme_attach_controller", 00:22:55.480 "req_id": 1 00:22:55.480 } 00:22:55.480 Got JSON-RPC error response 00:22:55.480 response: 00:22:55.480 { 00:22:55.480 "code": -5, 00:22:55.480 "message": "Input/output error" 00:22:55.480 } 00:22:55.480 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3808669 00:22:55.480 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3808669 ']' 00:22:55.480 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3808669 00:22:55.480 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:55.480 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:55.480 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3808669 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3808669' 00:22:55.738 killing process with pid 3808669 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3808669 00:22:55.738 Received shutdown signal, test time was about 10.000000 seconds 00:22:55.738 00:22:55.738 Latency(us) 00:22:55.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.738 =================================================================================================================== 00:22:55.738 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:55.738 [2024-07-25 01:08:48.647665] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3808669 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1SHZw5PyL6 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1SHZw5PyL6 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1SHZw5PyL6 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1SHZw5PyL6' 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3808809 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3808809 /var/tmp/bdevperf.sock 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3808809 ']' 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:55.738 01:08:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.738 [2024-07-25 01:08:48.881521] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:22:55.738 [2024-07-25 01:08:48.881611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808809 ] 00:22:55.996 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.996 [2024-07-25 01:08:48.940923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.996 [2024-07-25 01:08:49.026330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.996 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:55.996 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:55.996 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1SHZw5PyL6 00:22:56.254 [2024-07-25 01:08:49.340138] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.254 [2024-07-25 01:08:49.340314] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:56.254 [2024-07-25 01:08:49.345691] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:56.254 [2024-07-25 01:08:49.345725] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:56.254 [2024-07-25 01:08:49.345779] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:56.254 [2024-07-25 01:08:49.346270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a1ed0 (107): Transport endpoint is not connected 00:22:56.254 [2024-07-25 01:08:49.347272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a1ed0 (9): Bad file descriptor 00:22:56.254 [2024-07-25 01:08:49.348257] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:56.254 [2024-07-25 01:08:49.348279] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:56.254 [2024-07-25 01:08:49.348296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:56.254 request: 00:22:56.254 { 00:22:56.254 "name": "TLSTEST", 00:22:56.254 "trtype": "tcp", 00:22:56.254 "traddr": "10.0.0.2", 00:22:56.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.254 "adrfam": "ipv4", 00:22:56.254 "trsvcid": "4420", 00:22:56.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:56.254 "psk": "/tmp/tmp.1SHZw5PyL6", 00:22:56.254 "method": "bdev_nvme_attach_controller", 00:22:56.254 "req_id": 1 00:22:56.254 } 00:22:56.254 Got JSON-RPC error response 00:22:56.254 response: 00:22:56.254 { 00:22:56.254 "code": -5, 00:22:56.254 "message": "Input/output error" 00:22:56.254 } 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3808809 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3808809 ']' 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3808809 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3808809 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3808809' 00:22:56.254 killing process with pid 3808809 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3808809 00:22:56.254 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.254 00:22:56.254 Latency(us) 00:22:56.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.254 =================================================================================================================== 00:22:56.254 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:56.254 [2024-07-25 01:08:49.397826] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:56.254 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3808809 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3808832 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3808832 /var/tmp/bdevperf.sock 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3808832 ']' 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:56.512 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.512 [2024-07-25 01:08:49.663087] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
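Both negative attach cases run under the NOT wrapper whose entry trace appears above (valid_exec_arg and the type -t check confirm the argument is a callable function before it is invoked). A simplified reconstruction of the wrapper's contract, inferred from the es bookkeeping in the trace (es=1, the es > 128 signal check, and the final inverted test); the real helper in autotest_common.sh carries more state than this:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # death by signal still fails the test
    (( es != 0 ))                    # succeed only if the wrapped command failed
}

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''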
00:22:56.512 [2024-07-25 01:08:49.663176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808832 ] 00:22:56.770 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.770 [2024-07-25 01:08:49.724039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.770 [2024-07-25 01:08:49.807786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.770 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:56.770 01:08:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:56.770 01:08:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:57.028 [2024-07-25 01:08:50.155066] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:57.028 [2024-07-25 01:08:50.156837] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c855c0 (9): Bad file descriptor 00:22:57.028 [2024-07-25 01:08:50.157831] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:57.028 [2024-07-25 01:08:50.157855] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:57.028 [2024-07-25 01:08:50.157872] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:57.028 request: 00:22:57.028 { 00:22:57.028 "name": "TLSTEST", 00:22:57.028 "trtype": "tcp", 00:22:57.028 "traddr": "10.0.0.2", 00:22:57.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.028 "adrfam": "ipv4", 00:22:57.028 "trsvcid": "4420", 00:22:57.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.028 "method": "bdev_nvme_attach_controller", 00:22:57.028 "req_id": 1 00:22:57.028 } 00:22:57.028 Got JSON-RPC error response 00:22:57.028 response: 00:22:57.028 { 00:22:57.028 "code": -5, 00:22:57.028 "message": "Input/output error" 00:22:57.028 } 00:22:57.028 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3808832 00:22:57.028 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3808832 ']' 00:22:57.028 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3808832 00:22:57.028 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3808832 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3808832' 00:22:57.286 killing process with pid 3808832 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3808832 00:22:57.286 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.286 00:22:57.286 Latency(us) 00:22:57.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.286 =================================================================================================================== 00:22:57.286 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3808832 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3805462 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3805462 ']' 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3805462 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:57.286 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3805462 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3805462' 00:22:57.544 killing process with pid 3805462 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3805462 
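Each killprocess call traced here follows the same sequence: confirm the pid is set and still alive, read the command name with ps so a sudo wrapper is never signalled, then kill and wait so the process can flush its shutdown statistics. A simplified sketch reconstructed from the xtrace (Linux branch only, which is the one exercised in this run):

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # still running?
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1            # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # let it flush shutdown stats
}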
00:22:57.544 [2024-07-25 01:08:50.457933] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3805462 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:57.544 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.dKghDIXpz6 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.dKghDIXpz6 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3808982 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3808982 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3808982 ']' 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:57.803 01:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.803 [2024-07-25 01:08:50.796827] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
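The key_long built above is an NVMe TLS PSK interchange string: NVMeTLSkey-1:<hash>:<base64 of the configured key followed by its CRC-32>:, where hash value 02 selects SHA-384 (format_interchange_psk passes digest 2). A hedged reconstruction of what the format_key helper's embedded 'python -' step computes; the little-endian CRC byte order is an assumption inferred from the printed key:

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # byte order assumed
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PYEOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
# expected: NVMeTLSkey-1:02:MDAxMTIy...NTU2Njc3wWXNJw==:

The chmod 0600 on the mktemp file is not cosmetic; the later cases show both the initiator and the target refusing keys with looser modes.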
00:22:57.803 [2024-07-25 01:08:50.796920] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.803 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.803 [2024-07-25 01:08:50.864401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.803 [2024-07-25 01:08:50.949109] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.803 [2024-07-25 01:08:50.949162] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.803 [2024-07-25 01:08:50.949184] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.803 [2024-07-25 01:08:50.949195] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.803 [2024-07-25 01:08:50.949204] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.803 [2024-07-25 01:08:50.949229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.dKghDIXpz6 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dKghDIXpz6 00:22:58.061 01:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:58.318 [2024-07-25 01:08:51.359102] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.318 01:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:58.576 01:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:58.833 [2024-07-25 01:08:51.884472] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.833 [2024-07-25 01:08:51.884726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.833 01:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:59.091 malloc0 00:22:59.091 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:59.348 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dKghDIXpz6 
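setup_nvmf_tgt (target/tls.sh@49-58, traced across this passage) is the entire target-side TLS recipe. Condensed to its five RPCs, exactly as issued in the trace (paths relative to the spdk checkout):

key=/tmp/tmp.dKghDIXpz6
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k              # -k makes the listener require TLS
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$key"     # registers the PSK for this host

The deprecation warning logged next is expected; the trace itself says the file-based PSK path feature is scheduled for removal in v24.09.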
00:22:59.607 [2024-07-25 01:08:52.657875] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dKghDIXpz6 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dKghDIXpz6' 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3809262 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3809262 /var/tmp/bdevperf.sock 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3809262 ']' 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:59.607 01:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.607 [2024-07-25 01:08:52.711126] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
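With the same key now present on both ends, the run that follows is the positive case: the attach succeeds (the experimental-TLS notice and the spdk_nvme_ctrlr_opts.psk deprecation warning are expected), a TLSTESTn1 bdev appears, and a 10-second verify workload runs over the TLS connection. The initiator-side pair of commands, as in the trace:

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.dKghDIXpz6
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests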
00:22:59.607 [2024-07-25 01:08:52.711212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809262 ] 00:22:59.607 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.865 [2024-07-25 01:08:52.771814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.865 [2024-07-25 01:08:52.858902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.865 01:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:59.865 01:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:59.866 01:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dKghDIXpz6 00:23:00.123 [2024-07-25 01:08:53.204963] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.123 [2024-07-25 01:08:53.205073] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:00.438 TLSTESTn1 00:23:00.438 01:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:00.438 Running I/O for 10 seconds... 00:23:10.399 00:23:10.399 Latency(us) 00:23:10.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.399 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:10.399 Verification LBA range: start 0x0 length 0x2000 00:23:10.399 TLSTESTn1 : 10.05 2968.43 11.60 0.00 0.00 43008.58 6941.96 50098.63 00:23:10.399 =================================================================================================================== 00:23:10.399 Total : 2968.43 11.60 0.00 0.00 43008.58 6941.96 50098.63 00:23:10.399 0 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3809262 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3809262 ']' 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3809262 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3809262 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3809262' 00:23:10.399 killing process with pid 3809262 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3809262 00:23:10.399 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.399 00:23:10.399 Latency(us) 00:23:10.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:10.399 =================================================================================================================== 00:23:10.399 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.399 [2024-07-25 01:09:03.507661] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.399 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3809262 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.dKghDIXpz6 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dKghDIXpz6 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dKghDIXpz6 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dKghDIXpz6 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dKghDIXpz6' 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3810690 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3810690 /var/tmp/bdevperf.sock 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3810690 ']' 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.656 01:09:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.656 [2024-07-25 01:09:03.787813] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:23:10.656 [2024-07-25 01:09:03.787902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810690 ] 00:23:10.914 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.914 [2024-07-25 01:09:03.854008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.914 [2024-07-25 01:09:03.942957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.914 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:10.914 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:10.914 01:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dKghDIXpz6 00:23:11.481 [2024-07-25 01:09:04.325338] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.481 [2024-07-25 01:09:04.325426] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:11.481 [2024-07-25 01:09:04.325441] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.dKghDIXpz6 00:23:11.481 request: 00:23:11.481 { 00:23:11.481 "name": "TLSTEST", 00:23:11.481 "trtype": "tcp", 00:23:11.481 "traddr": "10.0.0.2", 00:23:11.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.481 "adrfam": "ipv4", 00:23:11.481 "trsvcid": "4420", 00:23:11.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.481 "psk": "/tmp/tmp.dKghDIXpz6", 00:23:11.481 "method": "bdev_nvme_attach_controller", 00:23:11.481 "req_id": 1 00:23:11.481 } 00:23:11.481 Got JSON-RPC error response 00:23:11.481 response: 00:23:11.481 { 00:23:11.481 "code": -1, 00:23:11.481 "message": "Operation not permitted" 00:23:11.481 } 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3810690 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3810690 ']' 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3810690 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3810690 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3810690' 00:23:11.481 killing process with pid 3810690 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3810690 00:23:11.481 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.481 00:23:11.481 Latency(us) 00:23:11.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.481 =================================================================================================================== 00:23:11.481 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3810690 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3808982 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3808982 ']' 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3808982 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3808982 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3808982' 00:23:11.481 killing process with pid 3808982 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3808982 00:23:11.481 [2024-07-25 01:09:04.613018] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:11.481 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3808982 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3810834 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3810834 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3810834 ']' 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.739 01:09:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.997 [2024-07-25 01:09:04.924768] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
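The chmod 0666 case above shows the initiator-side permission check: bdev_nvme_load_psk refuses a key file accessible to group or others, and the RPC fails with -1 (Operation not permitted) before any connection is attempted; the next case shows the target (tcp_load_psk) enforcing the same rule inside nvmf_subsystem_add_host. A sketch of an equivalent pre-flight check; the exact mode bits SPDK tests are an assumption, but the observed behavior is that 0600 passes and 0666 fails:

psk=/tmp/tmp.dKghDIXpz6
if [ -n "$(find "$psk" -perm /077)" ]; then    # any group/other permission bits set?
    echo "refusing $psk: PSK file must not be group/world accessible" >&2
    exit 1
fi
chmod 0600 "$psk"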
00:23:11.997 [2024-07-25 01:09:04.924846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.997 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.997 [2024-07-25 01:09:04.988909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.997 [2024-07-25 01:09:05.072265] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.997 [2024-07-25 01:09:05.072332] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.997 [2024-07-25 01:09:05.072346] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.997 [2024-07-25 01:09:05.072371] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.997 [2024-07-25 01:09:05.072381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.997 [2024-07-25 01:09:05.072411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.dKghDIXpz6 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.dKghDIXpz6 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.dKghDIXpz6 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dKghDIXpz6 00:23:12.254 01:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.512 [2024-07-25 01:09:05.486637] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.512 01:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.770 01:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:13.027 [2024-07-25 01:09:06.068170] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:13.027 [2024-07-25 01:09:06.068439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.027 01:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:13.285 malloc0 00:23:13.285 01:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.543 01:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dKghDIXpz6 00:23:13.800 [2024-07-25 01:09:06.942020] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:13.800 [2024-07-25 01:09:06.942063] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:13.800 [2024-07-25 01:09:06.942110] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:13.800 request: 00:23:13.800 { 00:23:13.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.800 "host": "nqn.2016-06.io.spdk:host1", 00:23:13.800 "psk": "/tmp/tmp.dKghDIXpz6", 00:23:13.800 "method": "nvmf_subsystem_add_host", 00:23:13.800 "req_id": 1 00:23:13.800 } 00:23:13.800 Got JSON-RPC error response 00:23:13.800 response: 00:23:13.800 { 00:23:13.800 "code": -32603, 00:23:13.800 "message": "Internal error" 00:23:13.800 } 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3810834 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3810834 ']' 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3810834 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3810834 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3810834' 00:23:14.058 killing process with pid 3810834 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3810834 00:23:14.058 01:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3810834 00:23:14.315 01:09:07 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.dKghDIXpz6 00:23:14.315 01:09:07 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:14.315 01:09:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.315 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:14.315 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=3811140 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3811140 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3811140 ']' 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:14.316 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.316 [2024-07-25 01:09:07.290830] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:14.316 [2024-07-25 01:09:07.290912] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.316 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.316 [2024-07-25 01:09:07.358100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.316 [2024-07-25 01:09:07.443333] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.316 [2024-07-25 01:09:07.443388] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.316 [2024-07-25 01:09:07.443418] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.316 [2024-07-25 01:09:07.443430] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.316 [2024-07-25 01:09:07.443439] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
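Every nvmfappstart in this test is the same start-and-poll idiom: launch nvmf_tgt in the background inside the test netns, then spin in waitforlisten (max_retries=100 in the trace) until the RPC socket answers. A simplified stand-in, not the real helper; polling with rpc_get_methods as the liveness probe is an assumption:

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
for ((i = 0; i < 100; i++)); do                # max_retries=100, as in the trace
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done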
00:23:14.316 [2024-07-25 01:09:07.443466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.dKghDIXpz6 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dKghDIXpz6 00:23:14.573 01:09:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.831 [2024-07-25 01:09:07.796579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.831 01:09:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:15.088 01:09:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:15.345 [2024-07-25 01:09:08.305975] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.345 [2024-07-25 01:09:08.306221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.345 01:09:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:15.602 malloc0 00:23:15.603 01:09:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:15.860 01:09:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dKghDIXpz6 00:23:16.117 [2024-07-25 01:09:09.082677] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3811910 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3811910 /var/tmp/bdevperf.sock 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3811910 ']' 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.117 01:09:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.117 [2024-07-25 01:09:09.142641] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:16.117 [2024-07-25 01:09:09.142715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811910 ] 00:23:16.117 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.117 [2024-07-25 01:09:09.201928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.375 [2024-07-25 01:09:09.289767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.375 01:09:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.375 01:09:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:16.375 01:09:09 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dKghDIXpz6 00:23:16.632 [2024-07-25 01:09:09.617085] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.632 [2024-07-25 01:09:09.617204] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:16.632 TLSTESTn1 00:23:16.632 01:09:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:16.888 01:09:10 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:16.888 "subsystems": [ 00:23:16.888 { 00:23:16.888 "subsystem": "keyring", 00:23:16.888 "config": [] 00:23:16.888 }, 00:23:16.888 { 00:23:16.888 "subsystem": "iobuf", 00:23:16.888 "config": [ 00:23:16.888 { 00:23:16.888 "method": "iobuf_set_options", 00:23:16.888 "params": { 00:23:16.888 "small_pool_count": 8192, 00:23:16.888 "large_pool_count": 1024, 00:23:16.888 "small_bufsize": 8192, 00:23:16.888 "large_bufsize": 135168 00:23:16.888 } 00:23:16.888 } 00:23:16.888 ] 00:23:16.888 }, 00:23:16.888 { 00:23:16.888 "subsystem": "sock", 00:23:16.888 "config": [ 00:23:16.888 { 00:23:16.888 "method": "sock_set_default_impl", 00:23:16.888 "params": { 00:23:16.888 "impl_name": "posix" 00:23:16.888 } 00:23:16.888 }, 00:23:16.888 { 00:23:16.888 "method": "sock_impl_set_options", 00:23:16.888 "params": { 00:23:16.888 "impl_name": "ssl", 00:23:16.888 "recv_buf_size": 4096, 00:23:16.888 "send_buf_size": 4096, 00:23:16.888 "enable_recv_pipe": true, 00:23:16.888 "enable_quickack": false, 00:23:16.888 "enable_placement_id": 0, 00:23:16.888 "enable_zerocopy_send_server": true, 00:23:16.888 "enable_zerocopy_send_client": false, 00:23:16.888 "zerocopy_threshold": 0, 00:23:16.888 "tls_version": 0, 00:23:16.888 "enable_ktls": false 00:23:16.888 } 00:23:16.888 }, 00:23:16.888 { 00:23:16.888 "method": "sock_impl_set_options", 00:23:16.888 "params": { 00:23:16.888 "impl_name": "posix", 00:23:16.888 "recv_buf_size": 2097152, 00:23:16.888 "send_buf_size": 
2097152, 00:23:16.888 "enable_recv_pipe": true, 00:23:16.888 "enable_quickack": false, 00:23:16.888 "enable_placement_id": 0, 00:23:16.888 "enable_zerocopy_send_server": true, 00:23:16.888 "enable_zerocopy_send_client": false, 00:23:16.888 "zerocopy_threshold": 0, 00:23:16.888 "tls_version": 0, 00:23:16.888 "enable_ktls": false 00:23:16.888 } 00:23:16.888 } 00:23:16.888 ] 00:23:16.888 }, 00:23:16.888 { 00:23:16.888 "subsystem": "vmd", 00:23:16.888 "config": [] 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "subsystem": "accel", 00:23:16.889 "config": [ 00:23:16.889 { 00:23:16.889 "method": "accel_set_options", 00:23:16.889 "params": { 00:23:16.889 "small_cache_size": 128, 00:23:16.889 "large_cache_size": 16, 00:23:16.889 "task_count": 2048, 00:23:16.889 "sequence_count": 2048, 00:23:16.889 "buf_count": 2048 00:23:16.889 } 00:23:16.889 } 00:23:16.889 ] 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "subsystem": "bdev", 00:23:16.889 "config": [ 00:23:16.889 { 00:23:16.889 "method": "bdev_set_options", 00:23:16.889 "params": { 00:23:16.889 "bdev_io_pool_size": 65535, 00:23:16.889 "bdev_io_cache_size": 256, 00:23:16.889 "bdev_auto_examine": true, 00:23:16.889 "iobuf_small_cache_size": 128, 00:23:16.889 "iobuf_large_cache_size": 16 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "bdev_raid_set_options", 00:23:16.889 "params": { 00:23:16.889 "process_window_size_kb": 1024 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "bdev_iscsi_set_options", 00:23:16.889 "params": { 00:23:16.889 "timeout_sec": 30 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "bdev_nvme_set_options", 00:23:16.889 "params": { 00:23:16.889 "action_on_timeout": "none", 00:23:16.889 "timeout_us": 0, 00:23:16.889 "timeout_admin_us": 0, 00:23:16.889 "keep_alive_timeout_ms": 10000, 00:23:16.889 "arbitration_burst": 0, 00:23:16.889 "low_priority_weight": 0, 00:23:16.889 "medium_priority_weight": 0, 00:23:16.889 "high_priority_weight": 0, 00:23:16.889 "nvme_adminq_poll_period_us": 10000, 00:23:16.889 "nvme_ioq_poll_period_us": 0, 00:23:16.889 "io_queue_requests": 0, 00:23:16.889 "delay_cmd_submit": true, 00:23:16.889 "transport_retry_count": 4, 00:23:16.889 "bdev_retry_count": 3, 00:23:16.889 "transport_ack_timeout": 0, 00:23:16.889 "ctrlr_loss_timeout_sec": 0, 00:23:16.889 "reconnect_delay_sec": 0, 00:23:16.889 "fast_io_fail_timeout_sec": 0, 00:23:16.889 "disable_auto_failback": false, 00:23:16.889 "generate_uuids": false, 00:23:16.889 "transport_tos": 0, 00:23:16.889 "nvme_error_stat": false, 00:23:16.889 "rdma_srq_size": 0, 00:23:16.889 "io_path_stat": false, 00:23:16.889 "allow_accel_sequence": false, 00:23:16.889 "rdma_max_cq_size": 0, 00:23:16.889 "rdma_cm_event_timeout_ms": 0, 00:23:16.889 "dhchap_digests": [ 00:23:16.889 "sha256", 00:23:16.889 "sha384", 00:23:16.889 "sha512" 00:23:16.889 ], 00:23:16.889 "dhchap_dhgroups": [ 00:23:16.889 "null", 00:23:16.889 "ffdhe2048", 00:23:16.889 "ffdhe3072", 00:23:16.889 "ffdhe4096", 00:23:16.889 "ffdhe6144", 00:23:16.889 "ffdhe8192" 00:23:16.889 ] 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "bdev_nvme_set_hotplug", 00:23:16.889 "params": { 00:23:16.889 "period_us": 100000, 00:23:16.889 "enable": false 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "bdev_malloc_create", 00:23:16.889 "params": { 00:23:16.889 "name": "malloc0", 00:23:16.889 "num_blocks": 8192, 00:23:16.889 "block_size": 4096, 00:23:16.889 "physical_block_size": 4096, 00:23:16.889 "uuid": 
"e23f10da-c7da-45f7-879e-fa0fe5a325de", 00:23:16.889 "optimal_io_boundary": 0 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "bdev_wait_for_examine" 00:23:16.889 } 00:23:16.889 ] 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "subsystem": "nbd", 00:23:16.889 "config": [] 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "subsystem": "scheduler", 00:23:16.889 "config": [ 00:23:16.889 { 00:23:16.889 "method": "framework_set_scheduler", 00:23:16.889 "params": { 00:23:16.889 "name": "static" 00:23:16.889 } 00:23:16.889 } 00:23:16.889 ] 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "subsystem": "nvmf", 00:23:16.889 "config": [ 00:23:16.889 { 00:23:16.889 "method": "nvmf_set_config", 00:23:16.889 "params": { 00:23:16.889 "discovery_filter": "match_any", 00:23:16.889 "admin_cmd_passthru": { 00:23:16.889 "identify_ctrlr": false 00:23:16.889 } 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "nvmf_set_max_subsystems", 00:23:16.889 "params": { 00:23:16.889 "max_subsystems": 1024 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "nvmf_set_crdt", 00:23:16.889 "params": { 00:23:16.889 "crdt1": 0, 00:23:16.889 "crdt2": 0, 00:23:16.889 "crdt3": 0 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "nvmf_create_transport", 00:23:16.889 "params": { 00:23:16.889 "trtype": "TCP", 00:23:16.889 "max_queue_depth": 128, 00:23:16.889 "max_io_qpairs_per_ctrlr": 127, 00:23:16.889 "in_capsule_data_size": 4096, 00:23:16.889 "max_io_size": 131072, 00:23:16.889 "io_unit_size": 131072, 00:23:16.889 "max_aq_depth": 128, 00:23:16.889 "num_shared_buffers": 511, 00:23:16.889 "buf_cache_size": 4294967295, 00:23:16.889 "dif_insert_or_strip": false, 00:23:16.889 "zcopy": false, 00:23:16.889 "c2h_success": false, 00:23:16.889 "sock_priority": 0, 00:23:16.889 "abort_timeout_sec": 1, 00:23:16.889 "ack_timeout": 0, 00:23:16.889 "data_wr_pool_size": 0 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "nvmf_create_subsystem", 00:23:16.889 "params": { 00:23:16.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.889 "allow_any_host": false, 00:23:16.889 "serial_number": "SPDK00000000000001", 00:23:16.889 "model_number": "SPDK bdev Controller", 00:23:16.889 "max_namespaces": 10, 00:23:16.889 "min_cntlid": 1, 00:23:16.889 "max_cntlid": 65519, 00:23:16.889 "ana_reporting": false 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "nvmf_subsystem_add_host", 00:23:16.889 "params": { 00:23:16.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.889 "host": "nqn.2016-06.io.spdk:host1", 00:23:16.889 "psk": "/tmp/tmp.dKghDIXpz6" 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "nvmf_subsystem_add_ns", 00:23:16.889 "params": { 00:23:16.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.889 "namespace": { 00:23:16.889 "nsid": 1, 00:23:16.889 "bdev_name": "malloc0", 00:23:16.889 "nguid": "E23F10DAC7DA45F7879EFA0FE5A325DE", 00:23:16.889 "uuid": "e23f10da-c7da-45f7-879e-fa0fe5a325de", 00:23:16.889 "no_auto_visible": false 00:23:16.889 } 00:23:16.889 } 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "nvmf_subsystem_add_listener", 00:23:16.889 "params": { 00:23:16.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.889 "listen_address": { 00:23:16.889 "trtype": "TCP", 00:23:16.889 "adrfam": "IPv4", 00:23:16.889 "traddr": "10.0.0.2", 00:23:16.889 "trsvcid": "4420" 00:23:16.889 }, 00:23:16.889 "secure_channel": true 00:23:16.889 } 00:23:16.889 } 00:23:16.889 ] 00:23:16.889 } 00:23:16.889 ] 00:23:16.889 }' 00:23:16.889 01:09:10 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:17.452 01:09:10 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:17.452 "subsystems": [ 00:23:17.452 { 00:23:17.452 "subsystem": "keyring", 00:23:17.452 "config": [] 00:23:17.452 }, 00:23:17.452 { 00:23:17.452 "subsystem": "iobuf", 00:23:17.452 "config": [ 00:23:17.453 { 00:23:17.453 "method": "iobuf_set_options", 00:23:17.453 "params": { 00:23:17.453 "small_pool_count": 8192, 00:23:17.453 "large_pool_count": 1024, 00:23:17.453 "small_bufsize": 8192, 00:23:17.453 "large_bufsize": 135168 00:23:17.453 } 00:23:17.453 } 00:23:17.453 ] 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "subsystem": "sock", 00:23:17.453 "config": [ 00:23:17.453 { 00:23:17.453 "method": "sock_set_default_impl", 00:23:17.453 "params": { 00:23:17.453 "impl_name": "posix" 00:23:17.453 } 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "method": "sock_impl_set_options", 00:23:17.453 "params": { 00:23:17.453 "impl_name": "ssl", 00:23:17.453 "recv_buf_size": 4096, 00:23:17.453 "send_buf_size": 4096, 00:23:17.453 "enable_recv_pipe": true, 00:23:17.453 "enable_quickack": false, 00:23:17.453 "enable_placement_id": 0, 00:23:17.453 "enable_zerocopy_send_server": true, 00:23:17.453 "enable_zerocopy_send_client": false, 00:23:17.453 "zerocopy_threshold": 0, 00:23:17.453 "tls_version": 0, 00:23:17.453 "enable_ktls": false 00:23:17.453 } 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "method": "sock_impl_set_options", 00:23:17.453 "params": { 00:23:17.453 "impl_name": "posix", 00:23:17.453 "recv_buf_size": 2097152, 00:23:17.453 "send_buf_size": 2097152, 00:23:17.453 "enable_recv_pipe": true, 00:23:17.453 "enable_quickack": false, 00:23:17.453 "enable_placement_id": 0, 00:23:17.453 "enable_zerocopy_send_server": true, 00:23:17.453 "enable_zerocopy_send_client": false, 00:23:17.453 "zerocopy_threshold": 0, 00:23:17.453 "tls_version": 0, 00:23:17.453 "enable_ktls": false 00:23:17.453 } 00:23:17.453 } 00:23:17.453 ] 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "subsystem": "vmd", 00:23:17.453 "config": [] 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "subsystem": "accel", 00:23:17.453 "config": [ 00:23:17.453 { 00:23:17.453 "method": "accel_set_options", 00:23:17.453 "params": { 00:23:17.453 "small_cache_size": 128, 00:23:17.453 "large_cache_size": 16, 00:23:17.453 "task_count": 2048, 00:23:17.453 "sequence_count": 2048, 00:23:17.453 "buf_count": 2048 00:23:17.453 } 00:23:17.453 } 00:23:17.453 ] 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "subsystem": "bdev", 00:23:17.453 "config": [ 00:23:17.453 { 00:23:17.453 "method": "bdev_set_options", 00:23:17.453 "params": { 00:23:17.453 "bdev_io_pool_size": 65535, 00:23:17.453 "bdev_io_cache_size": 256, 00:23:17.453 "bdev_auto_examine": true, 00:23:17.453 "iobuf_small_cache_size": 128, 00:23:17.453 "iobuf_large_cache_size": 16 00:23:17.453 } 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "method": "bdev_raid_set_options", 00:23:17.453 "params": { 00:23:17.453 "process_window_size_kb": 1024 00:23:17.453 } 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "method": "bdev_iscsi_set_options", 00:23:17.453 "params": { 00:23:17.453 "timeout_sec": 30 00:23:17.453 } 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "method": "bdev_nvme_set_options", 00:23:17.453 "params": { 00:23:17.453 "action_on_timeout": "none", 00:23:17.453 "timeout_us": 0, 00:23:17.453 "timeout_admin_us": 0, 00:23:17.453 "keep_alive_timeout_ms": 10000, 00:23:17.453 "arbitration_burst": 0, 
00:23:17.453 "low_priority_weight": 0, 00:23:17.453 "medium_priority_weight": 0, 00:23:17.453 "high_priority_weight": 0, 00:23:17.453 "nvme_adminq_poll_period_us": 10000, 00:23:17.453 "nvme_ioq_poll_period_us": 0, 00:23:17.453 "io_queue_requests": 512, 00:23:17.453 "delay_cmd_submit": true, 00:23:17.453 "transport_retry_count": 4, 00:23:17.453 "bdev_retry_count": 3, 00:23:17.453 "transport_ack_timeout": 0, 00:23:17.453 "ctrlr_loss_timeout_sec": 0, 00:23:17.453 "reconnect_delay_sec": 0, 00:23:17.453 "fast_io_fail_timeout_sec": 0, 00:23:17.453 "disable_auto_failback": false, 00:23:17.453 "generate_uuids": false, 00:23:17.453 "transport_tos": 0, 00:23:17.453 "nvme_error_stat": false, 00:23:17.453 "rdma_srq_size": 0, 00:23:17.453 "io_path_stat": false, 00:23:17.453 "allow_accel_sequence": false, 00:23:17.453 "rdma_max_cq_size": 0, 00:23:17.453 "rdma_cm_event_timeout_ms": 0, 00:23:17.453 "dhchap_digests": [ 00:23:17.453 "sha256", 00:23:17.453 "sha384", 00:23:17.453 "sha512" 00:23:17.453 ], 00:23:17.453 "dhchap_dhgroups": [ 00:23:17.453 "null", 00:23:17.453 "ffdhe2048", 00:23:17.453 "ffdhe3072", 00:23:17.453 "ffdhe4096", 00:23:17.453 "ffdhe6144", 00:23:17.453 "ffdhe8192" 00:23:17.453 ] 00:23:17.453 } 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "method": "bdev_nvme_attach_controller", 00:23:17.453 "params": { 00:23:17.453 "name": "TLSTEST", 00:23:17.453 "trtype": "TCP", 00:23:17.453 "adrfam": "IPv4", 00:23:17.453 "traddr": "10.0.0.2", 00:23:17.453 "trsvcid": "4420", 00:23:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.453 "prchk_reftag": false, 00:23:17.453 "prchk_guard": false, 00:23:17.453 "ctrlr_loss_timeout_sec": 0, 00:23:17.453 "reconnect_delay_sec": 0, 00:23:17.453 "fast_io_fail_timeout_sec": 0, 00:23:17.453 "psk": "/tmp/tmp.dKghDIXpz6", 00:23:17.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.453 "hdgst": false, 00:23:17.453 "ddgst": false 00:23:17.453 } 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "method": "bdev_nvme_set_hotplug", 00:23:17.453 "params": { 00:23:17.453 "period_us": 100000, 00:23:17.453 "enable": false 00:23:17.453 } 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "method": "bdev_wait_for_examine" 00:23:17.453 } 00:23:17.453 ] 00:23:17.453 }, 00:23:17.453 { 00:23:17.453 "subsystem": "nbd", 00:23:17.453 "config": [] 00:23:17.453 } 00:23:17.453 ] 00:23:17.453 }' 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3811910 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3811910 ']' 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3811910 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3811910 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3811910' 00:23:17.453 killing process with pid 3811910 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3811910 00:23:17.453 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.453 00:23:17.453 Latency(us) 00:23:17.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:17.453 =================================================================================================================== 00:23:17.453 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.453 [2024-07-25 01:09:10.395705] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3811910 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3811140 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3811140 ']' 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3811140 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:17.453 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3811140 00:23:17.713 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:17.713 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:17.713 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3811140' 00:23:17.713 killing process with pid 3811140 00:23:17.713 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3811140 00:23:17.713 [2024-07-25 01:09:10.622906] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:17.713 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3811140 00:23:17.713 01:09:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:17.713 01:09:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.713 01:09:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:17.713 "subsystems": [ 00:23:17.713 { 00:23:17.713 "subsystem": "keyring", 00:23:17.713 "config": [] 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "subsystem": "iobuf", 00:23:17.713 "config": [ 00:23:17.713 { 00:23:17.713 "method": "iobuf_set_options", 00:23:17.713 "params": { 00:23:17.713 "small_pool_count": 8192, 00:23:17.713 "large_pool_count": 1024, 00:23:17.713 "small_bufsize": 8192, 00:23:17.713 "large_bufsize": 135168 00:23:17.713 } 00:23:17.713 } 00:23:17.713 ] 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "subsystem": "sock", 00:23:17.713 "config": [ 00:23:17.713 { 00:23:17.713 "method": "sock_set_default_impl", 00:23:17.713 "params": { 00:23:17.713 "impl_name": "posix" 00:23:17.713 } 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "method": "sock_impl_set_options", 00:23:17.713 "params": { 00:23:17.713 "impl_name": "ssl", 00:23:17.713 "recv_buf_size": 4096, 00:23:17.713 "send_buf_size": 4096, 00:23:17.713 "enable_recv_pipe": true, 00:23:17.713 "enable_quickack": false, 00:23:17.713 "enable_placement_id": 0, 00:23:17.713 "enable_zerocopy_send_server": true, 00:23:17.713 "enable_zerocopy_send_client": false, 00:23:17.713 "zerocopy_threshold": 0, 00:23:17.713 "tls_version": 0, 00:23:17.713 "enable_ktls": false 00:23:17.713 } 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "method": "sock_impl_set_options", 00:23:17.713 "params": { 00:23:17.713 "impl_name": "posix", 00:23:17.713 "recv_buf_size": 2097152, 00:23:17.713 "send_buf_size": 2097152, 00:23:17.713 "enable_recv_pipe": true, 
00:23:17.713 "enable_quickack": false, 00:23:17.713 "enable_placement_id": 0, 00:23:17.713 "enable_zerocopy_send_server": true, 00:23:17.713 "enable_zerocopy_send_client": false, 00:23:17.713 "zerocopy_threshold": 0, 00:23:17.713 "tls_version": 0, 00:23:17.713 "enable_ktls": false 00:23:17.713 } 00:23:17.713 } 00:23:17.713 ] 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "subsystem": "vmd", 00:23:17.713 "config": [] 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "subsystem": "accel", 00:23:17.713 "config": [ 00:23:17.713 { 00:23:17.713 "method": "accel_set_options", 00:23:17.713 "params": { 00:23:17.713 "small_cache_size": 128, 00:23:17.713 "large_cache_size": 16, 00:23:17.713 "task_count": 2048, 00:23:17.713 "sequence_count": 2048, 00:23:17.713 "buf_count": 2048 00:23:17.713 } 00:23:17.713 } 00:23:17.713 ] 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "subsystem": "bdev", 00:23:17.713 "config": [ 00:23:17.713 { 00:23:17.713 "method": "bdev_set_options", 00:23:17.713 "params": { 00:23:17.713 "bdev_io_pool_size": 65535, 00:23:17.713 "bdev_io_cache_size": 256, 00:23:17.713 "bdev_auto_examine": true, 00:23:17.713 "iobuf_small_cache_size": 128, 00:23:17.713 "iobuf_large_cache_size": 16 00:23:17.713 } 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "method": "bdev_raid_set_options", 00:23:17.713 "params": { 00:23:17.713 "process_window_size_kb": 1024 00:23:17.713 } 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "method": "bdev_iscsi_set_options", 00:23:17.713 "params": { 00:23:17.713 "timeout_sec": 30 00:23:17.713 } 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "method": "bdev_nvme_set_options", 00:23:17.713 "params": { 00:23:17.713 "action_on_timeout": "none", 00:23:17.713 "timeout_us": 0, 00:23:17.713 "timeout_admin_us": 0, 00:23:17.713 "keep_alive_timeout_ms": 10000, 00:23:17.713 "arbitration_burst": 0, 00:23:17.713 "low_priority_weight": 0, 00:23:17.713 "medium_priority_weight": 0, 00:23:17.713 "high_priority_weight": 0, 00:23:17.713 "nvme_adminq_poll_period_us": 10000, 00:23:17.713 "nvme_ioq_poll_period_us": 0, 00:23:17.713 "io_queue_requests": 0, 00:23:17.713 "delay_cmd_submit": true, 00:23:17.713 "transport_retry_count": 4, 00:23:17.713 "bdev_retry_count": 3, 00:23:17.713 "transport_ack_timeout": 0, 00:23:17.713 "ctrlr_loss_timeout_sec": 0, 00:23:17.713 "reconnect_delay_sec": 0, 00:23:17.713 "fast_io_fail_timeout_sec": 0, 00:23:17.713 "disable_auto_failback": false, 00:23:17.713 "generate_uuids": false, 00:23:17.713 "transport_tos": 0, 00:23:17.713 "nvme_error_stat": false, 00:23:17.713 "rdma_srq_size": 0, 00:23:17.713 "io_path_stat": false, 00:23:17.713 "allow_accel_sequence": false, 00:23:17.713 "rdma_max_cq_size": 0, 00:23:17.713 "rdma_cm_event_timeout_ms": 0, 00:23:17.713 "dhchap_digests": [ 00:23:17.713 "sha256", 00:23:17.713 "sha384", 00:23:17.713 "sha512" 00:23:17.713 ], 00:23:17.713 "dhchap_dhgroups": [ 00:23:17.713 "null", 00:23:17.713 "ffdhe2048", 00:23:17.713 "ffdhe3072", 00:23:17.713 "ffdhe4096", 00:23:17.713 "ffdhe6144", 00:23:17.713 "ffdhe8192" 00:23:17.713 ] 00:23:17.713 } 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "method": "bdev_nvme_set_hotplug", 00:23:17.713 "params": { 00:23:17.713 "period_us": 100000, 00:23:17.713 "enable": false 00:23:17.713 } 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "method": "bdev_malloc_create", 00:23:17.713 "params": { 00:23:17.713 "name": "malloc0", 00:23:17.713 "num_blocks": 8192, 00:23:17.713 "block_size": 4096, 00:23:17.713 "physical_block_size": 4096, 00:23:17.713 "uuid": "e23f10da-c7da-45f7-879e-fa0fe5a325de", 00:23:17.713 "optimal_io_boundary": 0 
00:23:17.713 } 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "method": "bdev_wait_for_examine" 00:23:17.713 } 00:23:17.713 ] 00:23:17.713 }, 00:23:17.713 { 00:23:17.713 "subsystem": "nbd", 00:23:17.714 "config": [] 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "subsystem": "scheduler", 00:23:17.714 "config": [ 00:23:17.714 { 00:23:17.714 "method": "framework_set_scheduler", 00:23:17.714 "params": { 00:23:17.714 "name": "static" 00:23:17.714 } 00:23:17.714 } 00:23:17.714 ] 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "subsystem": "nvmf", 00:23:17.714 "config": [ 00:23:17.714 { 00:23:17.714 "method": "nvmf_set_config", 00:23:17.714 "params": { 00:23:17.714 "discovery_filter": "match_any", 00:23:17.714 "admin_cmd_passthru": { 00:23:17.714 "identify_ctrlr": false 00:23:17.714 } 00:23:17.714 } 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "method": "nvmf_set_max_subsystems", 00:23:17.714 "params": { 00:23:17.714 "max_subsystems": 1024 00:23:17.714 } 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "method": "nvmf_set_crdt", 00:23:17.714 "params": { 00:23:17.714 "crdt1": 0, 00:23:17.714 "crdt2": 0, 00:23:17.714 "crdt3": 0 00:23:17.714 } 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "method": "nvmf_create_transport", 00:23:17.714 "params": { 00:23:17.714 "trtype": "TCP", 00:23:17.714 "max_queue_depth": 128, 00:23:17.714 "max_io_qpairs_per_ctrlr": 127, 00:23:17.714 "in_capsule_data_size": 4096, 00:23:17.714 "max_io_size": 131072, 00:23:17.714 "io_unit_size": 131072, 00:23:17.714 "max_aq_depth": 128, 00:23:17.714 "num_shared_buffers": 511, 00:23:17.714 "buf_cache_size": 4294967295, 00:23:17.714 "dif_insert_or_strip": false, 00:23:17.714 "zcopy": false, 00:23:17.714 "c2h_success": false, 00:23:17.714 "sock_priority": 0, 00:23:17.714 "abort_timeout_sec": 1, 00:23:17.714 "ack_timeout": 0, 00:23:17.714 "data_wr_pool_size": 0 00:23:17.714 } 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "method": "nvmf_create_subsystem", 00:23:17.714 "params": { 00:23:17.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.714 "allow_any_host": false, 00:23:17.714 "serial_number": "SPDK00000000000001", 00:23:17.714 "model_number": "SPDK bdev Controller", 00:23:17.714 "max_namespaces": 10, 00:23:17.714 "min_cntlid": 1, 00:23:17.714 "max_cntlid": 65519, 00:23:17.714 "ana_reporting": false 00:23:17.714 } 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "method": "nvmf_subsystem_add_host", 00:23:17.714 "params": { 00:23:17.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.714 "host": "nqn.2016-06.io.spdk:host1", 00:23:17.714 "psk": "/tmp/tmp.dKghDIXpz6" 00:23:17.714 } 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "method": "nvmf_subsystem_add_ns", 00:23:17.714 "params": { 00:23:17.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.714 "namespace": { 00:23:17.714 "nsid": 1, 00:23:17.714 "bdev_name": "malloc0", 00:23:17.714 "nguid": "E23F10DAC7DA45F7879EFA0FE5A325DE", 00:23:17.714 "uuid": "e23f10da-c7da-45f7-879e-fa0fe5a325de", 00:23:17.714 "no_auto_visible": false 00:23:17.714 } 00:23:17.714 } 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "method": "nvmf_subsystem_add_listener", 00:23:17.714 "params": { 00:23:17.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.714 "listen_address": { 00:23:17.714 "trtype": "TCP", 00:23:17.714 "adrfam": "IPv4", 00:23:17.714 "traddr": "10.0.0.2", 00:23:17.714 "trsvcid": "4420" 00:23:17.714 }, 00:23:17.714 "secure_channel": true 00:23:17.714 } 00:23:17.714 } 00:23:17.714 ] 00:23:17.714 } 00:23:17.714 ] 00:23:17.714 }' 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:17.714 
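
Annotation: the JSON document echoed above is the target configuration captured earlier with save_config; just below it is handed back to a fresh nvmf_tgt on /dev/fd/62, recreating the TLS-enabled subsystem in one shot. The same end state can also be built piecewise over RPC, which is what target/tls.sh itself does later in this run. A sketch, reusing invocations that appear verbatim further down in this log (paths, NQNs and the malloc sizing are taken from the dump; the RPC socket is the default /var/tmp/spdk.sock):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                # TCP transport; -o disables c2h_success, as in the dump
  $rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB ramdisk: 8192 blocks x 4096 B
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dKghDIXpz6
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-secured listener
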
01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3812076 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3812076 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3812076 ']' 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:17.714 01:09:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.972 [2024-07-25 01:09:10.898501] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:17.972 [2024-07-25 01:09:10.898590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.972 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.972 [2024-07-25 01:09:10.965374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.972 [2024-07-25 01:09:11.055504] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.972 [2024-07-25 01:09:11.055575] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.972 [2024-07-25 01:09:11.055592] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.972 [2024-07-25 01:09:11.055606] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.972 [2024-07-25 01:09:11.055618] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
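
Annotation: note the -c /dev/fd/62 in the nvmf_tgt command line above; the configuration never touches disk, the harness echoes the JSON into a file descriptor that the target reads as if it were a file. A minimal bash sketch of the same pattern (the config string here is a placeholder, not the full dump):

  cfg='{ "subsystems": [] }'                                    # placeholder for the saved config
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$cfg")    # <(...) expands to /dev/fd/NN
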
00:23:17.972 [2024-07-25 01:09:11.055705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.230 [2024-07-25 01:09:11.288339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.230 [2024-07-25 01:09:11.304272] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:18.230 [2024-07-25 01:09:11.320335] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.230 [2024-07-25 01:09:11.331469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.794 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:18.794 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:18.794 01:09:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.794 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.794 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.052 01:09:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.052 01:09:11 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3812224 00:23:19.052 01:09:11 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3812224 /var/tmp/bdevperf.sock 00:23:19.052 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3812224 ']' 00:23:19.052 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.052 01:09:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:19.052 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:19.052 01:09:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:19.052 "subsystems": [ 00:23:19.053 { 00:23:19.053 "subsystem": "keyring", 00:23:19.053 "config": [] 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "subsystem": "iobuf", 00:23:19.053 "config": [ 00:23:19.053 { 00:23:19.053 "method": "iobuf_set_options", 00:23:19.053 "params": { 00:23:19.053 "small_pool_count": 8192, 00:23:19.053 "large_pool_count": 1024, 00:23:19.053 "small_bufsize": 8192, 00:23:19.053 "large_bufsize": 135168 00:23:19.053 } 00:23:19.053 } 00:23:19.053 ] 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "subsystem": "sock", 00:23:19.053 "config": [ 00:23:19.053 { 00:23:19.053 "method": "sock_set_default_impl", 00:23:19.053 "params": { 00:23:19.053 "impl_name": "posix" 00:23:19.053 } 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "method": "sock_impl_set_options", 00:23:19.053 "params": { 00:23:19.053 "impl_name": "ssl", 00:23:19.053 "recv_buf_size": 4096, 00:23:19.053 "send_buf_size": 4096, 00:23:19.053 "enable_recv_pipe": true, 00:23:19.053 "enable_quickack": false, 00:23:19.053 "enable_placement_id": 0, 00:23:19.053 "enable_zerocopy_send_server": true, 00:23:19.053 "enable_zerocopy_send_client": false, 00:23:19.053 "zerocopy_threshold": 0, 00:23:19.053 "tls_version": 0, 00:23:19.053 "enable_ktls": false 00:23:19.053 } 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "method": "sock_impl_set_options", 00:23:19.053 "params": { 00:23:19.053 "impl_name": "posix", 00:23:19.053 "recv_buf_size": 2097152, 00:23:19.053 "send_buf_size": 2097152, 00:23:19.053 "enable_recv_pipe": true, 00:23:19.053 
"enable_quickack": false, 00:23:19.053 "enable_placement_id": 0, 00:23:19.053 "enable_zerocopy_send_server": true, 00:23:19.053 "enable_zerocopy_send_client": false, 00:23:19.053 "zerocopy_threshold": 0, 00:23:19.053 "tls_version": 0, 00:23:19.053 "enable_ktls": false 00:23:19.053 } 00:23:19.053 } 00:23:19.053 ] 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "subsystem": "vmd", 00:23:19.053 "config": [] 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "subsystem": "accel", 00:23:19.053 "config": [ 00:23:19.053 { 00:23:19.053 "method": "accel_set_options", 00:23:19.053 "params": { 00:23:19.053 "small_cache_size": 128, 00:23:19.053 "large_cache_size": 16, 00:23:19.053 "task_count": 2048, 00:23:19.053 "sequence_count": 2048, 00:23:19.053 "buf_count": 2048 00:23:19.053 } 00:23:19.053 } 00:23:19.053 ] 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "subsystem": "bdev", 00:23:19.053 "config": [ 00:23:19.053 { 00:23:19.053 "method": "bdev_set_options", 00:23:19.053 "params": { 00:23:19.053 "bdev_io_pool_size": 65535, 00:23:19.053 "bdev_io_cache_size": 256, 00:23:19.053 "bdev_auto_examine": true, 00:23:19.053 "iobuf_small_cache_size": 128, 00:23:19.053 "iobuf_large_cache_size": 16 00:23:19.053 } 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "method": "bdev_raid_set_options", 00:23:19.053 "params": { 00:23:19.053 "process_window_size_kb": 1024 00:23:19.053 } 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "method": "bdev_iscsi_set_options", 00:23:19.053 "params": { 00:23:19.053 "timeout_sec": 30 00:23:19.053 } 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "method": "bdev_nvme_set_options", 00:23:19.053 "params": { 00:23:19.053 "action_on_timeout": "none", 00:23:19.053 "timeout_us": 0, 00:23:19.053 "timeout_admin_us": 0, 00:23:19.053 "keep_alive_timeout_ms": 10000, 00:23:19.053 "arbitration_burst": 0, 00:23:19.053 "low_priority_weight": 0, 00:23:19.053 "medium_priority_weight": 0, 00:23:19.053 "high_priority_weight": 0, 00:23:19.053 "nvme_adminq_poll_period_us": 10000, 00:23:19.053 "nvme_ioq_poll_period_us": 0, 00:23:19.053 "io_queue_requests": 512, 00:23:19.053 "delay_cmd_submit": true, 00:23:19.053 "transport_retry_count": 4, 00:23:19.053 "bdev_retry_count": 3, 00:23:19.053 "transport_ack_timeout": 0, 00:23:19.053 "ctrlr_loss_timeout_sec": 0, 00:23:19.053 "reconnect_delay_sec": 0, 00:23:19.053 "fast_io_fail_timeout_sec": 0, 00:23:19.053 "disable_auto_failback": false, 00:23:19.053 "generate_uuids": false, 00:23:19.053 "transport_tos": 0, 00:23:19.053 "nvme_error_stat": false, 00:23:19.053 "rdma_srq_size": 0, 00:23:19.053 "io_path_stat": false, 00:23:19.053 "allow_accel_sequence": false, 00:23:19.053 "rdma_max_cq_size": 0, 00:23:19.053 "rdma_cm_event_timeout_ms": 0, 00:23:19.053 "dhchap_digests": [ 00:23:19.053 "sha256", 00:23:19.053 "sha384", 00:23:19.053 "sha512" 00:23:19.053 ], 00:23:19.053 "dhchap_dhgroups": [ 00:23:19.053 "null", 00:23:19.053 "ffdhe2048", 00:23:19.053 "ffdhe3072", 00:23:19.053 "ffdhe4096", 00:23:19.053 "ffd 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:19.053 he6144", 00:23:19.053 "ffdhe8192" 00:23:19.053 ] 00:23:19.053 } 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "method": "bdev_nvme_attach_controller", 00:23:19.053 "params": { 00:23:19.053 "name": "TLSTEST", 00:23:19.053 "trtype": "TCP", 00:23:19.053 "adrfam": "IPv4", 00:23:19.053 "traddr": "10.0.0.2", 00:23:19.053 "trsvcid": "4420", 00:23:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.053 "prchk_reftag": false, 00:23:19.053 "prchk_guard": false, 00:23:19.053 "ctrlr_loss_timeout_sec": 0, 00:23:19.053 "reconnect_delay_sec": 0, 00:23:19.053 "fast_io_fail_timeout_sec": 0, 00:23:19.053 "psk": "/tmp/tmp.dKghDIXpz6", 00:23:19.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.053 "hdgst": false, 00:23:19.053 "ddgst": false 00:23:19.053 } 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "method": "bdev_nvme_set_hotplug", 00:23:19.053 "params": { 00:23:19.053 "period_us": 100000, 00:23:19.053 "enable": false 00:23:19.053 } 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "method": "bdev_wait_for_examine" 00:23:19.053 } 00:23:19.053 ] 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "subsystem": "nbd", 00:23:19.053 "config": [] 00:23:19.053 } 00:23:19.053 ] 00:23:19.053 }' 00:23:19.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.053 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.053 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.053 [2024-07-25 01:09:12.005937] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:19.053 [2024-07-25 01:09:12.006016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812224 ] 00:23:19.053 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.053 [2024-07-25 01:09:12.063267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.053 [2024-07-25 01:09:12.150605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.311 [2024-07-25 01:09:12.321307] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.311 [2024-07-25 01:09:12.321440] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:19.875 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.875 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:19.875 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:20.132 Running I/O for 10 seconds... 
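
Annotation: bdevperf was launched with -z, which parks it idle after initialization until a trigger arrives on its RPC socket; the bdevperf.py perform_tests call above is what actually starts the 10-second verify workload. The shape of that pattern, condensed (long Jenkins paths shortened; $bdevperfconf is the JSON string captured earlier with save_config):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &   # -z: wait for an RPC trigger
  bperf_pid=$!
  # the harness polls the socket (waitforlisten) before issuing:
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
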
00:23:30.179 00:23:30.179 Latency(us) 00:23:30.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.179 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:30.179 Verification LBA range: start 0x0 length 0x2000 00:23:30.179 TLSTESTn1 : 10.03 3519.81 13.75 0.00 0.00 36295.57 9709.04 52817.16 00:23:30.179 =================================================================================================================== 00:23:30.179 Total : 3519.81 13.75 0.00 0.00 36295.57 9709.04 52817.16 00:23:30.179 0 00:23:30.179 01:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.179 01:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3812224 00:23:30.179 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3812224 ']' 00:23:30.179 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3812224 00:23:30.180 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:30.180 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.180 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3812224 00:23:30.180 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:30.180 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:30.180 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3812224' 00:23:30.180 killing process with pid 3812224 00:23:30.180 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3812224 00:23:30.180 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.180 00:23:30.180 Latency(us) 00:23:30.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.180 =================================================================================================================== 00:23:30.180 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:30.180 [2024-07-25 01:09:23.158150] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:30.180 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3812224 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3812076 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3812076 ']' 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3812076 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3812076 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3812076' 00:23:30.437 killing process with pid 3812076 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3812076 00:23:30.437 [2024-07-25 01:09:23.411803] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:23:30.437 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3812076 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3813558 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3813558 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3813558 ']' 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.695 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.695 [2024-07-25 01:09:23.704372] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:30.695 [2024-07-25 01:09:23.704452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.695 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.695 [2024-07-25 01:09:23.766578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.953 [2024-07-25 01:09:23.852385] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.953 [2024-07-25 01:09:23.852434] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.953 [2024-07-25 01:09:23.852461] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.953 [2024-07-25 01:09:23.852472] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.953 [2024-07-25 01:09:23.852482] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
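
Annotation: waitforlisten above gates the test until the just-forked target answers on its UNIX-domain RPC socket. The real helper lives in autotest_common.sh; this is only a simplified stand-in showing the idea (the retry count, poll interval and the rpc_get_methods probe are assumptions of this sketch, not a copy of the helper):

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1                          # app died while starting
          scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null && return 0
          sleep 0.1
      done
      return 1                                                             # timed out
  }
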
00:23:30.953 [2024-07-25 01:09:23.852508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.dKghDIXpz6 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dKghDIXpz6 00:23:30.953 01:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.211 [2024-07-25 01:09:24.264298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.211 01:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.468 01:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:31.726 [2024-07-25 01:09:24.745544] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.726 [2024-07-25 01:09:24.745807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.726 01:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.984 malloc0 00:23:31.984 01:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:32.243 01:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dKghDIXpz6 00:23:32.501 [2024-07-25 01:09:25.583778] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3813842 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3813842 /var/tmp/bdevperf.sock 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3813842 ']' 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.501 01:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.501 [2024-07-25 01:09:25.647536] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:32.501 [2024-07-25 01:09:25.647604] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813842 ] 00:23:32.759 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.759 [2024-07-25 01:09:25.706013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.759 [2024-07-25 01:09:25.791465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.759 01:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.759 01:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:32.759 01:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dKghDIXpz6 00:23:33.324 01:09:26 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:33.324 [2024-07-25 01:09:26.460952] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.582 nvme0n1 00:23:33.582 01:09:26 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.582 Running I/O for 1 seconds... 
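
Annotation: unlike the earlier attach, which passed the PSK as a raw file path and tripped the spdk_nvme_ctrlr_opts.psk deprecation warning, this run registers the key with the keyring first and then references it by name. The two RPCs, as issued against the bdevperf instance above:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dKghDIXpz6
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The keyring reference is the replacement for the file-path PSK that the warnings in this log flag for removal in v24.09.
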
00:23:34.956 00:23:34.956 Latency(us) 00:23:34.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.956 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:34.956 Verification LBA range: start 0x0 length 0x2000 00:23:34.956 nvme0n1 : 1.03 3279.86 12.81 0.00 0.00 38533.35 6407.96 62914.56 00:23:34.956 =================================================================================================================== 00:23:34.956 Total : 3279.86 12.81 0.00 0.00 38533.35 6407.96 62914.56 00:23:34.956 0 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3813842 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3813842 ']' 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3813842 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3813842 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3813842' 00:23:34.956 killing process with pid 3813842 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3813842 00:23:34.956 Received shutdown signal, test time was about 1.000000 seconds 00:23:34.956 00:23:34.956 Latency(us) 00:23:34.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.956 =================================================================================================================== 00:23:34.956 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3813842 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3813558 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3813558 ']' 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3813558 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3813558 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3813558' 00:23:34.956 killing process with pid 3813558 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3813558 00:23:34.956 [2024-07-25 01:09:27.981691] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:34.956 01:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3813558 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.214 
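
Annotation: killprocess is the teardown helper seen throughout this log: it checks the pid is still alive with kill -0, inspects the process name (the reactor_N values printed above) before signalling, then reaps it. A condensed sketch of the steps the xtrace makes visible (the real helper in autotest_common.sh also special-cases processes running under sudo, which is elided here):

  killprocess() {
      local pid=$1 name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                          # bail out if it already exited
      if [[ $(uname) == Linux ]]; then
          name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_1; 'sudo' is handled specially upstream
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                          # wait only works for children of this shell
  }
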
01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3814118 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3814118 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3814118 ']' 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.214 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.214 [2024-07-25 01:09:28.289193] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:35.214 [2024-07-25 01:09:28.289298] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.214 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.214 [2024-07-25 01:09:28.358311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.472 [2024-07-25 01:09:28.449399] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.472 [2024-07-25 01:09:28.449461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.472 [2024-07-25 01:09:28.449478] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.472 [2024-07-25 01:09:28.449491] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.472 [2024-07-25 01:09:28.449503] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
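
Annotation: this target is again started with -e 0xFFFF, so every tracepoint group is recording, and the startup notices above spell out both ways to get at the data. Taken literally (the binary path assumes an in-tree build; the shm name nvmf_trace.0 follows from -i 0):

  build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # live snapshot of the enabled tracepoints
  cp /dev/shm/nvmf_trace.0 /tmp/                       # or keep the shm file for offline analysis
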
00:23:35.472 [2024-07-25 01:09:28.449548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.472 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.472 [2024-07-25 01:09:28.595941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.472 malloc0 00:23:35.730 [2024-07-25 01:09:28.628685] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.730 [2024-07-25 01:09:28.628968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3814258 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3814258 /var/tmp/bdevperf.sock 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3814258 ']' 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.730 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.730 [2024-07-25 01:09:28.698676] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:23:35.730 [2024-07-25 01:09:28.698758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814258 ] 00:23:35.730 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.730 [2024-07-25 01:09:28.761527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.730 [2024-07-25 01:09:28.852470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.988 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.988 01:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.988 01:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dKghDIXpz6 00:23:36.246 01:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:36.503 [2024-07-25 01:09:29.427139] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.503 nvme0n1 00:23:36.503 01:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.503 Running I/O for 1 seconds... 00:23:37.875 00:23:37.875 Latency(us) 00:23:37.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.875 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:37.875 Verification LBA range: start 0x0 length 0x2000 00:23:37.875 nvme0n1 : 1.05 2449.61 9.57 0.00 0.00 51235.51 6068.15 77672.30 00:23:37.875 =================================================================================================================== 00:23:37.875 Total : 2449.61 9.57 0.00 0.00 51235.51 6068.15 77672.30 00:23:37.875 0 00:23:37.875 01:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:37.875 01:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.875 01:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.875 01:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.876 01:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:37.876 "subsystems": [ 00:23:37.876 { 00:23:37.876 "subsystem": "keyring", 00:23:37.876 "config": [ 00:23:37.876 { 00:23:37.876 "method": "keyring_file_add_key", 00:23:37.876 "params": { 00:23:37.876 "name": "key0", 00:23:37.876 "path": "/tmp/tmp.dKghDIXpz6" 00:23:37.876 } 00:23:37.876 } 00:23:37.876 ] 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "subsystem": "iobuf", 00:23:37.876 "config": [ 00:23:37.876 { 00:23:37.876 "method": "iobuf_set_options", 00:23:37.876 "params": { 00:23:37.876 "small_pool_count": 8192, 00:23:37.876 "large_pool_count": 1024, 00:23:37.876 "small_bufsize": 8192, 00:23:37.876 "large_bufsize": 135168 00:23:37.876 } 00:23:37.876 } 00:23:37.876 ] 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "subsystem": "sock", 00:23:37.876 "config": [ 00:23:37.876 { 00:23:37.876 "method": "sock_set_default_impl", 00:23:37.876 "params": { 00:23:37.876 "impl_name": "posix" 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 
00:23:37.876 "method": "sock_impl_set_options", 00:23:37.876 "params": { 00:23:37.876 "impl_name": "ssl", 00:23:37.876 "recv_buf_size": 4096, 00:23:37.876 "send_buf_size": 4096, 00:23:37.876 "enable_recv_pipe": true, 00:23:37.876 "enable_quickack": false, 00:23:37.876 "enable_placement_id": 0, 00:23:37.876 "enable_zerocopy_send_server": true, 00:23:37.876 "enable_zerocopy_send_client": false, 00:23:37.876 "zerocopy_threshold": 0, 00:23:37.876 "tls_version": 0, 00:23:37.876 "enable_ktls": false 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "sock_impl_set_options", 00:23:37.876 "params": { 00:23:37.876 "impl_name": "posix", 00:23:37.876 "recv_buf_size": 2097152, 00:23:37.876 "send_buf_size": 2097152, 00:23:37.876 "enable_recv_pipe": true, 00:23:37.876 "enable_quickack": false, 00:23:37.876 "enable_placement_id": 0, 00:23:37.876 "enable_zerocopy_send_server": true, 00:23:37.876 "enable_zerocopy_send_client": false, 00:23:37.876 "zerocopy_threshold": 0, 00:23:37.876 "tls_version": 0, 00:23:37.876 "enable_ktls": false 00:23:37.876 } 00:23:37.876 } 00:23:37.876 ] 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "subsystem": "vmd", 00:23:37.876 "config": [] 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "subsystem": "accel", 00:23:37.876 "config": [ 00:23:37.876 { 00:23:37.876 "method": "accel_set_options", 00:23:37.876 "params": { 00:23:37.876 "small_cache_size": 128, 00:23:37.876 "large_cache_size": 16, 00:23:37.876 "task_count": 2048, 00:23:37.876 "sequence_count": 2048, 00:23:37.876 "buf_count": 2048 00:23:37.876 } 00:23:37.876 } 00:23:37.876 ] 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "subsystem": "bdev", 00:23:37.876 "config": [ 00:23:37.876 { 00:23:37.876 "method": "bdev_set_options", 00:23:37.876 "params": { 00:23:37.876 "bdev_io_pool_size": 65535, 00:23:37.876 "bdev_io_cache_size": 256, 00:23:37.876 "bdev_auto_examine": true, 00:23:37.876 "iobuf_small_cache_size": 128, 00:23:37.876 "iobuf_large_cache_size": 16 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "bdev_raid_set_options", 00:23:37.876 "params": { 00:23:37.876 "process_window_size_kb": 1024 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "bdev_iscsi_set_options", 00:23:37.876 "params": { 00:23:37.876 "timeout_sec": 30 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "bdev_nvme_set_options", 00:23:37.876 "params": { 00:23:37.876 "action_on_timeout": "none", 00:23:37.876 "timeout_us": 0, 00:23:37.876 "timeout_admin_us": 0, 00:23:37.876 "keep_alive_timeout_ms": 10000, 00:23:37.876 "arbitration_burst": 0, 00:23:37.876 "low_priority_weight": 0, 00:23:37.876 "medium_priority_weight": 0, 00:23:37.876 "high_priority_weight": 0, 00:23:37.876 "nvme_adminq_poll_period_us": 10000, 00:23:37.876 "nvme_ioq_poll_period_us": 0, 00:23:37.876 "io_queue_requests": 0, 00:23:37.876 "delay_cmd_submit": true, 00:23:37.876 "transport_retry_count": 4, 00:23:37.876 "bdev_retry_count": 3, 00:23:37.876 "transport_ack_timeout": 0, 00:23:37.876 "ctrlr_loss_timeout_sec": 0, 00:23:37.876 "reconnect_delay_sec": 0, 00:23:37.876 "fast_io_fail_timeout_sec": 0, 00:23:37.876 "disable_auto_failback": false, 00:23:37.876 "generate_uuids": false, 00:23:37.876 "transport_tos": 0, 00:23:37.876 "nvme_error_stat": false, 00:23:37.876 "rdma_srq_size": 0, 00:23:37.876 "io_path_stat": false, 00:23:37.876 "allow_accel_sequence": false, 00:23:37.876 "rdma_max_cq_size": 0, 00:23:37.876 "rdma_cm_event_timeout_ms": 0, 00:23:37.876 "dhchap_digests": [ 00:23:37.876 "sha256", 00:23:37.876 "sha384", 
00:23:37.876 "sha512" 00:23:37.876 ], 00:23:37.876 "dhchap_dhgroups": [ 00:23:37.876 "null", 00:23:37.876 "ffdhe2048", 00:23:37.876 "ffdhe3072", 00:23:37.876 "ffdhe4096", 00:23:37.876 "ffdhe6144", 00:23:37.876 "ffdhe8192" 00:23:37.876 ] 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "bdev_nvme_set_hotplug", 00:23:37.876 "params": { 00:23:37.876 "period_us": 100000, 00:23:37.876 "enable": false 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "bdev_malloc_create", 00:23:37.876 "params": { 00:23:37.876 "name": "malloc0", 00:23:37.876 "num_blocks": 8192, 00:23:37.876 "block_size": 4096, 00:23:37.876 "physical_block_size": 4096, 00:23:37.876 "uuid": "266971dc-cb95-48c4-9a41-4dd0e92c2c69", 00:23:37.876 "optimal_io_boundary": 0 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "bdev_wait_for_examine" 00:23:37.876 } 00:23:37.876 ] 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "subsystem": "nbd", 00:23:37.876 "config": [] 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "subsystem": "scheduler", 00:23:37.876 "config": [ 00:23:37.876 { 00:23:37.876 "method": "framework_set_scheduler", 00:23:37.876 "params": { 00:23:37.876 "name": "static" 00:23:37.876 } 00:23:37.876 } 00:23:37.876 ] 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "subsystem": "nvmf", 00:23:37.876 "config": [ 00:23:37.876 { 00:23:37.876 "method": "nvmf_set_config", 00:23:37.876 "params": { 00:23:37.876 "discovery_filter": "match_any", 00:23:37.876 "admin_cmd_passthru": { 00:23:37.876 "identify_ctrlr": false 00:23:37.876 } 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "nvmf_set_max_subsystems", 00:23:37.876 "params": { 00:23:37.876 "max_subsystems": 1024 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "nvmf_set_crdt", 00:23:37.876 "params": { 00:23:37.876 "crdt1": 0, 00:23:37.876 "crdt2": 0, 00:23:37.876 "crdt3": 0 00:23:37.876 } 00:23:37.876 }, 00:23:37.876 { 00:23:37.876 "method": "nvmf_create_transport", 00:23:37.876 "params": { 00:23:37.876 "trtype": "TCP", 00:23:37.876 "max_queue_depth": 128, 00:23:37.876 "max_io_qpairs_per_ctrlr": 127, 00:23:37.876 "in_capsule_data_size": 4096, 00:23:37.876 "max_io_size": 131072, 00:23:37.876 "io_unit_size": 131072, 00:23:37.876 "max_aq_depth": 128, 00:23:37.876 "num_shared_buffers": 511, 00:23:37.876 "buf_cache_size": 4294967295, 00:23:37.877 "dif_insert_or_strip": false, 00:23:37.877 "zcopy": false, 00:23:37.877 "c2h_success": false, 00:23:37.877 "sock_priority": 0, 00:23:37.877 "abort_timeout_sec": 1, 00:23:37.877 "ack_timeout": 0, 00:23:37.877 "data_wr_pool_size": 0 00:23:37.877 } 00:23:37.877 }, 00:23:37.877 { 00:23:37.877 "method": "nvmf_create_subsystem", 00:23:37.877 "params": { 00:23:37.877 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.877 "allow_any_host": false, 00:23:37.877 "serial_number": "00000000000000000000", 00:23:37.877 "model_number": "SPDK bdev Controller", 00:23:37.877 "max_namespaces": 32, 00:23:37.877 "min_cntlid": 1, 00:23:37.877 "max_cntlid": 65519, 00:23:37.877 "ana_reporting": false 00:23:37.877 } 00:23:37.877 }, 00:23:37.877 { 00:23:37.877 "method": "nvmf_subsystem_add_host", 00:23:37.877 "params": { 00:23:37.877 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.877 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.877 "psk": "key0" 00:23:37.877 } 00:23:37.877 }, 00:23:37.877 { 00:23:37.877 "method": "nvmf_subsystem_add_ns", 00:23:37.877 "params": { 00:23:37.877 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.877 "namespace": { 00:23:37.877 "nsid": 1, 00:23:37.877 "bdev_name": 
"malloc0", 00:23:37.877 "nguid": "266971DCCB9548C49A414DD0E92C2C69", 00:23:37.877 "uuid": "266971dc-cb95-48c4-9a41-4dd0e92c2c69", 00:23:37.877 "no_auto_visible": false 00:23:37.877 } 00:23:37.877 } 00:23:37.877 }, 00:23:37.877 { 00:23:37.877 "method": "nvmf_subsystem_add_listener", 00:23:37.877 "params": { 00:23:37.877 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.877 "listen_address": { 00:23:37.877 "trtype": "TCP", 00:23:37.877 "adrfam": "IPv4", 00:23:37.877 "traddr": "10.0.0.2", 00:23:37.877 "trsvcid": "4420" 00:23:37.877 }, 00:23:37.877 "secure_channel": true 00:23:37.877 } 00:23:37.877 } 00:23:37.877 ] 00:23:37.877 } 00:23:37.877 ] 00:23:37.877 }' 00:23:37.877 01:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:38.135 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:38.135 "subsystems": [ 00:23:38.135 { 00:23:38.135 "subsystem": "keyring", 00:23:38.135 "config": [ 00:23:38.135 { 00:23:38.135 "method": "keyring_file_add_key", 00:23:38.135 "params": { 00:23:38.135 "name": "key0", 00:23:38.135 "path": "/tmp/tmp.dKghDIXpz6" 00:23:38.135 } 00:23:38.135 } 00:23:38.135 ] 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "subsystem": "iobuf", 00:23:38.135 "config": [ 00:23:38.135 { 00:23:38.135 "method": "iobuf_set_options", 00:23:38.135 "params": { 00:23:38.135 "small_pool_count": 8192, 00:23:38.135 "large_pool_count": 1024, 00:23:38.135 "small_bufsize": 8192, 00:23:38.135 "large_bufsize": 135168 00:23:38.135 } 00:23:38.135 } 00:23:38.135 ] 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "subsystem": "sock", 00:23:38.135 "config": [ 00:23:38.135 { 00:23:38.135 "method": "sock_set_default_impl", 00:23:38.135 "params": { 00:23:38.135 "impl_name": "posix" 00:23:38.135 } 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "method": "sock_impl_set_options", 00:23:38.135 "params": { 00:23:38.135 "impl_name": "ssl", 00:23:38.135 "recv_buf_size": 4096, 00:23:38.135 "send_buf_size": 4096, 00:23:38.135 "enable_recv_pipe": true, 00:23:38.135 "enable_quickack": false, 00:23:38.135 "enable_placement_id": 0, 00:23:38.135 "enable_zerocopy_send_server": true, 00:23:38.135 "enable_zerocopy_send_client": false, 00:23:38.135 "zerocopy_threshold": 0, 00:23:38.135 "tls_version": 0, 00:23:38.135 "enable_ktls": false 00:23:38.135 } 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "method": "sock_impl_set_options", 00:23:38.135 "params": { 00:23:38.135 "impl_name": "posix", 00:23:38.135 "recv_buf_size": 2097152, 00:23:38.135 "send_buf_size": 2097152, 00:23:38.135 "enable_recv_pipe": true, 00:23:38.135 "enable_quickack": false, 00:23:38.135 "enable_placement_id": 0, 00:23:38.135 "enable_zerocopy_send_server": true, 00:23:38.135 "enable_zerocopy_send_client": false, 00:23:38.135 "zerocopy_threshold": 0, 00:23:38.135 "tls_version": 0, 00:23:38.135 "enable_ktls": false 00:23:38.135 } 00:23:38.135 } 00:23:38.135 ] 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "subsystem": "vmd", 00:23:38.135 "config": [] 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "subsystem": "accel", 00:23:38.135 "config": [ 00:23:38.135 { 00:23:38.135 "method": "accel_set_options", 00:23:38.135 "params": { 00:23:38.135 "small_cache_size": 128, 00:23:38.135 "large_cache_size": 16, 00:23:38.135 "task_count": 2048, 00:23:38.135 "sequence_count": 2048, 00:23:38.135 "buf_count": 2048 00:23:38.135 } 00:23:38.135 } 00:23:38.135 ] 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "subsystem": "bdev", 00:23:38.135 "config": [ 00:23:38.135 { 00:23:38.135 
"method": "bdev_set_options", 00:23:38.135 "params": { 00:23:38.135 "bdev_io_pool_size": 65535, 00:23:38.135 "bdev_io_cache_size": 256, 00:23:38.135 "bdev_auto_examine": true, 00:23:38.135 "iobuf_small_cache_size": 128, 00:23:38.135 "iobuf_large_cache_size": 16 00:23:38.135 } 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "method": "bdev_raid_set_options", 00:23:38.135 "params": { 00:23:38.135 "process_window_size_kb": 1024 00:23:38.135 } 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "method": "bdev_iscsi_set_options", 00:23:38.135 "params": { 00:23:38.135 "timeout_sec": 30 00:23:38.135 } 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "method": "bdev_nvme_set_options", 00:23:38.135 "params": { 00:23:38.135 "action_on_timeout": "none", 00:23:38.135 "timeout_us": 0, 00:23:38.135 "timeout_admin_us": 0, 00:23:38.135 "keep_alive_timeout_ms": 10000, 00:23:38.135 "arbitration_burst": 0, 00:23:38.135 "low_priority_weight": 0, 00:23:38.135 "medium_priority_weight": 0, 00:23:38.135 "high_priority_weight": 0, 00:23:38.135 "nvme_adminq_poll_period_us": 10000, 00:23:38.135 "nvme_ioq_poll_period_us": 0, 00:23:38.135 "io_queue_requests": 512, 00:23:38.135 "delay_cmd_submit": true, 00:23:38.135 "transport_retry_count": 4, 00:23:38.135 "bdev_retry_count": 3, 00:23:38.135 "transport_ack_timeout": 0, 00:23:38.135 "ctrlr_loss_timeout_sec": 0, 00:23:38.135 "reconnect_delay_sec": 0, 00:23:38.135 "fast_io_fail_timeout_sec": 0, 00:23:38.135 "disable_auto_failback": false, 00:23:38.135 "generate_uuids": false, 00:23:38.135 "transport_tos": 0, 00:23:38.135 "nvme_error_stat": false, 00:23:38.135 "rdma_srq_size": 0, 00:23:38.135 "io_path_stat": false, 00:23:38.135 "allow_accel_sequence": false, 00:23:38.135 "rdma_max_cq_size": 0, 00:23:38.135 "rdma_cm_event_timeout_ms": 0, 00:23:38.135 "dhchap_digests": [ 00:23:38.135 "sha256", 00:23:38.135 "sha384", 00:23:38.135 "sha512" 00:23:38.135 ], 00:23:38.135 "dhchap_dhgroups": [ 00:23:38.135 "null", 00:23:38.135 "ffdhe2048", 00:23:38.135 "ffdhe3072", 00:23:38.135 "ffdhe4096", 00:23:38.135 "ffdhe6144", 00:23:38.135 "ffdhe8192" 00:23:38.135 ] 00:23:38.135 } 00:23:38.135 }, 00:23:38.135 { 00:23:38.135 "method": "bdev_nvme_attach_controller", 00:23:38.136 "params": { 00:23:38.136 "name": "nvme0", 00:23:38.136 "trtype": "TCP", 00:23:38.136 "adrfam": "IPv4", 00:23:38.136 "traddr": "10.0.0.2", 00:23:38.136 "trsvcid": "4420", 00:23:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.136 "prchk_reftag": false, 00:23:38.136 "prchk_guard": false, 00:23:38.136 "ctrlr_loss_timeout_sec": 0, 00:23:38.136 "reconnect_delay_sec": 0, 00:23:38.136 "fast_io_fail_timeout_sec": 0, 00:23:38.136 "psk": "key0", 00:23:38.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.136 "hdgst": false, 00:23:38.136 "ddgst": false 00:23:38.136 } 00:23:38.136 }, 00:23:38.136 { 00:23:38.136 "method": "bdev_nvme_set_hotplug", 00:23:38.136 "params": { 00:23:38.136 "period_us": 100000, 00:23:38.136 "enable": false 00:23:38.136 } 00:23:38.136 }, 00:23:38.136 { 00:23:38.136 "method": "bdev_enable_histogram", 00:23:38.136 "params": { 00:23:38.136 "name": "nvme0n1", 00:23:38.136 "enable": true 00:23:38.136 } 00:23:38.136 }, 00:23:38.136 { 00:23:38.136 "method": "bdev_wait_for_examine" 00:23:38.136 } 00:23:38.136 ] 00:23:38.136 }, 00:23:38.136 { 00:23:38.136 "subsystem": "nbd", 00:23:38.136 "config": [] 00:23:38.136 } 00:23:38.136 ] 00:23:38.136 }' 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3814258 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3814258 
']' 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3814258 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3814258 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3814258' 00:23:38.136 killing process with pid 3814258 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3814258 00:23:38.136 Received shutdown signal, test time was about 1.000000 seconds 00:23:38.136 00:23:38.136 Latency(us) 00:23:38.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.136 =================================================================================================================== 00:23:38.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.136 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3814258 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3814118 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3814118 ']' 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3814118 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3814118 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3814118' 00:23:38.393 killing process with pid 3814118 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3814118 00:23:38.393 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3814118 00:23:38.651 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:38.651 01:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.651 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:38.651 "subsystems": [ 00:23:38.651 { 00:23:38.651 "subsystem": "keyring", 00:23:38.651 "config": [ 00:23:38.651 { 00:23:38.651 "method": "keyring_file_add_key", 00:23:38.651 "params": { 00:23:38.651 "name": "key0", 00:23:38.651 "path": "/tmp/tmp.dKghDIXpz6" 00:23:38.651 } 00:23:38.651 } 00:23:38.651 ] 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "subsystem": "iobuf", 00:23:38.651 "config": [ 00:23:38.651 { 00:23:38.651 "method": "iobuf_set_options", 00:23:38.651 "params": { 00:23:38.651 "small_pool_count": 8192, 00:23:38.651 "large_pool_count": 1024, 00:23:38.651 "small_bufsize": 8192, 00:23:38.651 "large_bufsize": 135168 00:23:38.651 } 00:23:38.651 } 00:23:38.651 ] 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "subsystem": "sock", 00:23:38.651 "config": [ 00:23:38.651 { 00:23:38.651 "method": "sock_set_default_impl", 
00:23:38.651 "params": { 00:23:38.651 "impl_name": "posix" 00:23:38.651 } 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "method": "sock_impl_set_options", 00:23:38.651 "params": { 00:23:38.651 "impl_name": "ssl", 00:23:38.651 "recv_buf_size": 4096, 00:23:38.651 "send_buf_size": 4096, 00:23:38.651 "enable_recv_pipe": true, 00:23:38.651 "enable_quickack": false, 00:23:38.651 "enable_placement_id": 0, 00:23:38.651 "enable_zerocopy_send_server": true, 00:23:38.651 "enable_zerocopy_send_client": false, 00:23:38.651 "zerocopy_threshold": 0, 00:23:38.651 "tls_version": 0, 00:23:38.651 "enable_ktls": false 00:23:38.651 } 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "method": "sock_impl_set_options", 00:23:38.651 "params": { 00:23:38.651 "impl_name": "posix", 00:23:38.651 "recv_buf_size": 2097152, 00:23:38.651 "send_buf_size": 2097152, 00:23:38.651 "enable_recv_pipe": true, 00:23:38.651 "enable_quickack": false, 00:23:38.651 "enable_placement_id": 0, 00:23:38.651 "enable_zerocopy_send_server": true, 00:23:38.651 "enable_zerocopy_send_client": false, 00:23:38.651 "zerocopy_threshold": 0, 00:23:38.651 "tls_version": 0, 00:23:38.651 "enable_ktls": false 00:23:38.651 } 00:23:38.651 } 00:23:38.651 ] 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "subsystem": "vmd", 00:23:38.651 "config": [] 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "subsystem": "accel", 00:23:38.651 "config": [ 00:23:38.651 { 00:23:38.651 "method": "accel_set_options", 00:23:38.651 "params": { 00:23:38.651 "small_cache_size": 128, 00:23:38.651 "large_cache_size": 16, 00:23:38.651 "task_count": 2048, 00:23:38.651 "sequence_count": 2048, 00:23:38.651 "buf_count": 2048 00:23:38.651 } 00:23:38.651 } 00:23:38.651 ] 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "subsystem": "bdev", 00:23:38.651 "config": [ 00:23:38.651 { 00:23:38.651 "method": "bdev_set_options", 00:23:38.651 "params": { 00:23:38.651 "bdev_io_pool_size": 65535, 00:23:38.651 "bdev_io_cache_size": 256, 00:23:38.651 "bdev_auto_examine": true, 00:23:38.651 "iobuf_small_cache_size": 128, 00:23:38.651 "iobuf_large_cache_size": 16 00:23:38.651 } 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "method": "bdev_raid_set_options", 00:23:38.651 "params": { 00:23:38.651 "process_window_size_kb": 1024 00:23:38.651 } 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "method": "bdev_iscsi_set_options", 00:23:38.651 "params": { 00:23:38.651 "timeout_sec": 30 00:23:38.651 } 00:23:38.651 }, 00:23:38.651 { 00:23:38.651 "method": "bdev_nvme_set_options", 00:23:38.651 "params": { 00:23:38.651 "action_on_timeout": "none", 00:23:38.651 "timeout_us": 0, 00:23:38.651 "timeout_admin_us": 0, 00:23:38.651 "keep_alive_timeout_ms": 10000, 00:23:38.651 "arbitration_burst": 0, 00:23:38.651 "low_priority_weight": 0, 00:23:38.651 "medium_priority_weight": 0, 00:23:38.651 "high_priority_weight": 0, 00:23:38.651 "nvme_adminq_poll_period_us": 10000, 00:23:38.651 "nvme_ioq_poll_period_us": 0, 00:23:38.651 "io_queue_requests": 0, 00:23:38.651 "delay_cmd_submit": true, 00:23:38.651 "transport_retry_count": 4, 00:23:38.651 "bdev_retry_count": 3, 00:23:38.651 "transport_ack_timeout": 0, 00:23:38.651 "ctrlr_loss_timeout_sec": 0, 00:23:38.651 "reconnect_delay_sec": 0, 00:23:38.651 "fast_io_fail_timeout_sec": 0, 00:23:38.651 "disable_auto_failback": false, 00:23:38.651 "generate_uuids": false, 00:23:38.651 "transport_tos": 0, 00:23:38.651 "nvme_error_stat": false, 00:23:38.651 "rdma_srq_size": 0, 00:23:38.651 "io_path_stat": false, 00:23:38.651 "allow_accel_sequence": false, 00:23:38.651 "rdma_max_cq_size": 0, 00:23:38.651 
"rdma_cm_event_timeout_ms": 0, 00:23:38.651 "dhchap_digests": [ 00:23:38.651 "sha256", 00:23:38.651 "sha384", 00:23:38.651 "sha512" 00:23:38.651 ], 00:23:38.651 "dhchap_dhgroups": [ 00:23:38.651 "null", 00:23:38.651 "ffdhe2048", 00:23:38.651 "ffdhe3072", 00:23:38.651 "ffdhe4096", 00:23:38.651 "ffdhe6144", 00:23:38.651 "ffdhe8192" 00:23:38.651 ] 00:23:38.651 } 00:23:38.651 }, 00:23:38.651 { 00:23:38.652 "method": "bdev_nvme_set_hotplug", 00:23:38.652 "params": { 00:23:38.652 "period_us": 100000, 00:23:38.652 "enable": false 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "bdev_malloc_create", 00:23:38.652 "params": { 00:23:38.652 "name": "malloc0", 00:23:38.652 "num_blocks": 8192, 00:23:38.652 "block_size": 4096, 00:23:38.652 "physical_block_size": 4096, 00:23:38.652 "uuid": "266971dc-cb95-48c4-9a41-4dd0e92c2c69", 00:23:38.652 "optimal_io_boundary": 0 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "bdev_wait_for_examine" 00:23:38.652 } 00:23:38.652 ] 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "subsystem": "nbd", 00:23:38.652 "config": [] 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "subsystem": "scheduler", 00:23:38.652 "config": [ 00:23:38.652 { 00:23:38.652 "method": "framework_set_scheduler", 00:23:38.652 "params": { 00:23:38.652 "name": "static" 00:23:38.652 } 00:23:38.652 } 00:23:38.652 ] 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "subsystem": "nvmf", 00:23:38.652 "config": [ 00:23:38.652 { 00:23:38.652 "method": "nvmf_set_config", 00:23:38.652 "params": { 00:23:38.652 "discovery_filter": "match_any", 00:23:38.652 "admin_cmd_passthru": { 00:23:38.652 "identify_ctrlr": false 00:23:38.652 } 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "nvmf_set_max_subsystems", 00:23:38.652 "params": { 00:23:38.652 "max_subsystems": 1024 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "nvmf_set_crdt", 00:23:38.652 "params": { 00:23:38.652 "crdt1": 0, 00:23:38.652 "crdt2": 0, 00:23:38.652 "crdt3": 0 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "nvmf_create_transport", 00:23:38.652 "params": { 00:23:38.652 "trtype": "TCP", 00:23:38.652 "max_queue_depth": 128, 00:23:38.652 "max_io_qpairs_per_ctrlr": 127, 00:23:38.652 "in_capsule_data_size": 4096, 00:23:38.652 "max_io_size": 131072, 00:23:38.652 "io_unit_size": 131072, 00:23:38.652 "max_aq_depth": 128, 00:23:38.652 "num_shared_buffers": 511, 00:23:38.652 "buf_cache_size": 4294967295, 00:23:38.652 "dif_insert_or_strip": false, 00:23:38.652 "zcopy": false, 00:23:38.652 "c2h_success": false, 00:23:38.652 "sock_priority": 0, 00:23:38.652 "abort_timeout_sec": 1, 00:23:38.652 "ack_timeout": 0, 00:23:38.652 "data_wr_pool_size": 0 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "nvmf_create_subsystem", 00:23:38.652 "params": { 00:23:38.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.652 "allow_any_host": false, 00:23:38.652 "serial_number": "00000000000000000000", 00:23:38.652 "model_number": "SPDK bdev Controller", 00:23:38.652 "max_namespaces": 32, 00:23:38.652 "min_cntlid": 1, 00:23:38.652 "max_cntlid": 65519, 00:23:38.652 "ana_reporting": false 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "nvmf_subsystem_add_host", 00:23:38.652 "params": { 00:23:38.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.652 "host": "nqn.2016-06.io.spdk:host1", 00:23:38.652 "psk": "key0" 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "nvmf_subsystem_add_ns", 00:23:38.652 "params": { 00:23:38.652 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:38.652 "namespace": { 00:23:38.652 "nsid": 1, 00:23:38.652 "bdev_name": "malloc0", 00:23:38.652 "nguid": "266971DCCB9548C49A414DD0E92C2C69", 00:23:38.652 "uuid": "266971dc-cb95-48c4-9a41-4dd0e92c2c69", 00:23:38.652 "no_auto_visible": false 00:23:38.652 } 00:23:38.652 } 00:23:38.652 }, 00:23:38.652 { 00:23:38.652 "method": "nvmf_subsystem_add_listener", 00:23:38.652 "params": { 00:23:38.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.652 "listen_address": { 00:23:38.652 "trtype": "TCP", 00:23:38.652 "adrfam": "IPv4", 00:23:38.652 "traddr": "10.0.0.2", 00:23:38.652 "trsvcid": "4420" 00:23:38.652 }, 00:23:38.652 "secure_channel": true 00:23:38.652 } 00:23:38.652 } 00:23:38.652 ] 00:23:38.652 } 00:23:38.652 ] 00:23:38.652 }' 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3814553 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3814553 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3814553 ']' 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.652 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.652 [2024-07-25 01:09:31.631193] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:38.652 [2024-07-25 01:09:31.631314] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.652 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.652 [2024-07-25 01:09:31.696479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.652 [2024-07-25 01:09:31.782844] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.652 [2024-07-25 01:09:31.782899] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.652 [2024-07-25 01:09:31.782926] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.652 [2024-07-25 01:09:31.782937] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.652 [2024-07-25 01:09:31.782946] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.652 [2024-07-25 01:09:31.783035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.910 [2024-07-25 01:09:32.027422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.910 [2024-07-25 01:09:32.059390] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.167 [2024-07-25 01:09:32.069465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3814706 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3814706 /var/tmp/bdevperf.sock 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3814706 ']' 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
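With bdevperf idling under -z, the test drives everything over its RPC socket: register the PSK in the keyring, attach the TLS-protected controller, then run the workload. A condensed sketch of that sequence, matching the RPC calls traced earlier in this log (workspace paths shortened for readability):

# Condensed sketch of the bdevperf TLS sequence traced in this log.
rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Register the pre-shared key file under the name "key0".
$rpc keyring_file_add_key key0 /tmp/tmp.dKghDIXpz6

# Attach an NVMe-oF/TCP controller using that PSK (TLS support is flagged
# experimental by bdev_nvme_rpc.c, as the NOTICE lines above show).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# Kick off the verify workload against the attached bdev.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests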
00:23:39.732 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:39.732 "subsystems": [ 00:23:39.732 { 00:23:39.732 "subsystem": "keyring", 00:23:39.732 "config": [ 00:23:39.732 { 00:23:39.732 "method": "keyring_file_add_key", 00:23:39.732 "params": { 00:23:39.732 "name": "key0", 00:23:39.732 "path": "/tmp/tmp.dKghDIXpz6" 00:23:39.732 } 00:23:39.732 } 00:23:39.732 ] 00:23:39.732 }, 00:23:39.732 { 00:23:39.732 "subsystem": "iobuf", 00:23:39.733 "config": [ 00:23:39.733 { 00:23:39.733 "method": "iobuf_set_options", 00:23:39.733 "params": { 00:23:39.733 "small_pool_count": 8192, 00:23:39.733 "large_pool_count": 1024, 00:23:39.733 "small_bufsize": 8192, 00:23:39.733 "large_bufsize": 135168 00:23:39.733 } 00:23:39.733 } 00:23:39.733 ] 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "subsystem": "sock", 00:23:39.733 "config": [ 00:23:39.733 { 00:23:39.733 "method": "sock_set_default_impl", 00:23:39.733 "params": { 00:23:39.733 "impl_name": "posix" 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "sock_impl_set_options", 00:23:39.733 "params": { 00:23:39.733 "impl_name": "ssl", 00:23:39.733 "recv_buf_size": 4096, 00:23:39.733 "send_buf_size": 4096, 00:23:39.733 "enable_recv_pipe": true, 00:23:39.733 "enable_quickack": false, 00:23:39.733 "enable_placement_id": 0, 00:23:39.733 "enable_zerocopy_send_server": true, 00:23:39.733 "enable_zerocopy_send_client": false, 00:23:39.733 "zerocopy_threshold": 0, 00:23:39.733 "tls_version": 0, 00:23:39.733 "enable_ktls": false 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "sock_impl_set_options", 00:23:39.733 "params": { 00:23:39.733 "impl_name": "posix", 00:23:39.733 "recv_buf_size": 2097152, 00:23:39.733 "send_buf_size": 2097152, 00:23:39.733 "enable_recv_pipe": true, 00:23:39.733 "enable_quickack": false, 00:23:39.733 "enable_placement_id": 0, 00:23:39.733 "enable_zerocopy_send_server": true, 00:23:39.733 "enable_zerocopy_send_client": false, 00:23:39.733 "zerocopy_threshold": 0, 00:23:39.733 "tls_version": 0, 00:23:39.733 "enable_ktls": false 00:23:39.733 } 00:23:39.733 } 00:23:39.733 ] 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "subsystem": "vmd", 00:23:39.733 "config": [] 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "subsystem": "accel", 00:23:39.733 "config": [ 00:23:39.733 { 00:23:39.733 "method": "accel_set_options", 00:23:39.733 "params": { 00:23:39.733 "small_cache_size": 128, 00:23:39.733 "large_cache_size": 16, 00:23:39.733 "task_count": 2048, 00:23:39.733 "sequence_count": 2048, 00:23:39.733 "buf_count": 2048 00:23:39.733 } 00:23:39.733 } 00:23:39.733 ] 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "subsystem": "bdev", 00:23:39.733 "config": [ 00:23:39.733 { 00:23:39.733 "method": "bdev_set_options", 00:23:39.733 "params": { 00:23:39.733 "bdev_io_pool_size": 65535, 00:23:39.733 "bdev_io_cache_size": 256, 00:23:39.733 "bdev_auto_examine": true, 00:23:39.733 "iobuf_small_cache_size": 128, 00:23:39.733 "iobuf_large_cache_size": 16 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "bdev_raid_set_options", 00:23:39.733 "params": { 00:23:39.733 "process_window_size_kb": 1024 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "bdev_iscsi_set_options", 00:23:39.733 "params": { 00:23:39.733 "timeout_sec": 30 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "bdev_nvme_set_options", 00:23:39.733 "params": { 00:23:39.733 "action_on_timeout": "none", 00:23:39.733 "timeout_us": 0, 00:23:39.733 "timeout_admin_us": 0, 00:23:39.733 "keep_alive_timeout_ms": 
10000, 00:23:39.733 "arbitration_burst": 0, 00:23:39.733 "low_priority_weight": 0, 00:23:39.733 "medium_priority_weight": 0, 00:23:39.733 "high_priority_weight": 0, 00:23:39.733 "nvme_adminq_poll_period_us": 10000, 00:23:39.733 "nvme_ioq_poll_period_us": 0, 00:23:39.733 "io_queue_requests": 512, 00:23:39.733 "delay_cmd_submit": true, 00:23:39.733 "transport_retry_count": 4, 00:23:39.733 "bdev_retry_count": 3, 00:23:39.733 "transport_ack_timeout": 0, 00:23:39.733 "ctrlr_loss_timeout_sec": 0, 00:23:39.733 "reconnect_delay_sec": 0, 00:23:39.733 "fast_io_fail_timeout_sec": 0, 00:23:39.733 "disable_auto_failback": false, 00:23:39.733 "generate_uuids": false, 00:23:39.733 "transport_tos": 0, 00:23:39.733 "nvme_error_stat": false, 00:23:39.733 "rdma_srq_size": 0, 00:23:39.733 "io_path_stat": false, 00:23:39.733 "allow_accel_sequence": false, 00:23:39.733 "rdma_max_cq_size": 0, 00:23:39.733 "rdma_cm_event_timeout_ms": 0, 00:23:39.733 "dhchap_digests": [ 00:23:39.733 "sha256", 00:23:39.733 "sha384", 00:23:39.733 "sha512" 00:23:39.733 ], 00:23:39.733 "dhchap_dhgroups": [ 00:23:39.733 "null", 00:23:39.733 "ffdhe2048", 00:23:39.733 "ffdhe3072", 00:23:39.733 "ffdhe4096", 00:23:39.733 "ffdhe6144", 00:23:39.733 "ffdhe8192" 00:23:39.733 ] 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "bdev_nvme_attach_controller", 00:23:39.733 "params": { 00:23:39.733 "name": "nvme0", 00:23:39.733 "trtype": "TCP", 00:23:39.733 "adrfam": "IPv4", 00:23:39.733 "traddr": "10.0.0.2", 00:23:39.733 "trsvcid": "4420", 00:23:39.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.733 "prchk_reftag": false, 00:23:39.733 "prchk_guard": false, 00:23:39.733 "ctrlr_loss_timeout_sec": 0, 00:23:39.733 "reconnect_delay_sec": 0, 00:23:39.733 "fast_io_fail_timeout_sec": 0, 00:23:39.733 "psk": "key0", 00:23:39.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.733 "hdgst": false, 00:23:39.733 "ddgst": false 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "bdev_nvme_set_hotplug", 00:23:39.733 "params": { 00:23:39.733 "period_us": 100000, 00:23:39.733 "enable": false 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "bdev_enable_histogram", 00:23:39.733 "params": { 00:23:39.733 "name": "nvme0n1", 00:23:39.733 "enable": true 00:23:39.733 } 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "method": "bdev_wait_for_examine" 00:23:39.733 } 00:23:39.733 ] 00:23:39.733 }, 00:23:39.733 { 00:23:39.733 "subsystem": "nbd", 00:23:39.733 "config": [] 00:23:39.733 } 00:23:39.733 ] 00:23:39.733 }' 00:23:39.733 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:39.733 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.733 [2024-07-25 01:09:32.642172] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:23:39.733 [2024-07-25 01:09:32.642269] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814706 ] 00:23:39.733 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.733 [2024-07-25 01:09:32.703874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.733 [2024-07-25 01:09:32.794113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.990 [2024-07-25 01:09:32.970605] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.554 01:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.554 01:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:40.554 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.554 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:40.810 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.810 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.810 Running I/O for 1 seconds... 00:23:42.180 00:23:42.180 Latency(us) 00:23:42.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.180 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:42.180 Verification LBA range: start 0x0 length 0x2000 00:23:42.180 nvme0n1 : 1.05 2515.54 9.83 0.00 0.00 49859.06 8592.50 83497.72 00:23:42.180 =================================================================================================================== 00:23:42.180 Total : 2515.54 9.83 0.00 0.00 49859.06 8592.50 83497.72 00:23:42.180 0 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:42.180 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:42.181 nvmf_trace.0 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3814706 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3814706 ']' 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3814706 
00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3814706 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3814706' 00:23:42.181 killing process with pid 3814706 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3814706 00:23:42.181 Received shutdown signal, test time was about 1.000000 seconds 00:23:42.181 00:23:42.181 Latency(us) 00:23:42.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.181 =================================================================================================================== 00:23:42.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3814706 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:42.181 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:42.438 rmmod nvme_tcp 00:23:42.438 rmmod nvme_fabrics 00:23:42.438 rmmod nvme_keyring 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3814553 ']' 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3814553 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3814553 ']' 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3814553 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3814553 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3814553' 00:23:42.438 killing process with pid 3814553 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3814553 00:23:42.438 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3814553 00:23:42.696 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:42.696 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:42.696 01:09:35 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:42.696 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:42.696 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:42.696 01:09:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.696 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.696 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.597 01:09:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:44.597 01:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1SHZw5PyL6 /tmp/tmp.lYKicCj708 /tmp/tmp.dKghDIXpz6 00:23:44.597 00:23:44.597 real 1m19.007s 00:23:44.597 user 2m5.697s 00:23:44.597 sys 0m26.377s 00:23:44.597 01:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:44.597 01:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.597 ************************************ 00:23:44.597 END TEST nvmf_tls 00:23:44.597 ************************************ 00:23:44.856 01:09:37 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:44.856 01:09:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:44.856 01:09:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:44.856 01:09:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:44.856 ************************************ 00:23:44.856 START TEST nvmf_fips 00:23:44.856 ************************************ 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:44.856 * Looking for test storage... 
00:23:44.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.856 01:09:37 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.857 01:09:37 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:44.857 Error setting digest 00:23:44.857 0062EB32E87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:44.857 0062EB32E87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:44.857 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:44.858 01:09:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:47.386 01:09:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:47.386 
01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:47.386 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:47.386 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:47.386 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:47.386 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:47.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:23:47.386 00:23:47.386 --- 10.0.0.2 ping statistics --- 00:23:47.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.386 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:47.386 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:47.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:23:47.387 00:23:47.387 --- 10.0.0.1 ping statistics --- 00:23:47.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.387 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3817058 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3817058 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3817058 ']' 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:47.387 [2024-07-25 01:09:40.236800] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:47.387 [2024-07-25 01:09:40.236879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.387 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.387 [2024-07-25 01:09:40.304347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.387 [2024-07-25 01:09:40.396043] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.387 [2024-07-25 01:09:40.396113] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
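For readers following the trace: the fips.sh checks above reduce to three tests. OpenSSL must be new enough (the digit-by-digit decimal comparison from scripts/common.sh at the top), both the base and FIPS providers must be listed, and a non-approved digest such as MD5 must actually be rejected, which is why the "Error setting digest" failure further up is the expected, passing outcome. A minimal stand-alone sketch of the same smoke test, assuming an OpenSSL 3.x build with a FIPS provider configured (check_fips is our name for illustration, not an SPDK function):

    #!/usr/bin/env bash
    # Sketch of the FIPS smoke test traced above (not the SPDK script itself).
    check_fips() {
        # Both the base and fips providers must show up in the provider list.
        local providers
        providers=$(openssl list -providers | grep name)
        grep -qi base <<<"$providers" || return 1
        grep -qi fips <<<"$providers" || return 1
        # MD5 is not FIPS-approved; with FIPS enforced this command must fail,
        # mirroring the "Error setting digest" seen in the log.
        if echo test | openssl md5 >/dev/null 2>&1; then
            echo "MD5 succeeded - FIPS mode is not enforced" >&2
            return 1
        fi
        return 0
    }
    check_fips && echo "FIPS provider active, MD5 correctly rejected"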
00:23:47.387 [2024-07-25 01:09:40.396130] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.387 [2024-07-25 01:09:40.396143] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.387 [2024-07-25 01:09:40.396155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.387 [2024-07-25 01:09:40.396184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:47.387 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.645 [2024-07-25 01:09:40.748005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.645 [2024-07-25 01:09:40.764008] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.645 [2024-07-25 01:09:40.764235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.645 [2024-07-25 01:09:40.795198] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:47.903 malloc0 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3817091 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3817091 /var/tmp/bdevperf.sock 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3817091 ']' 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:47.903 01:09:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:47.903 [2024-07-25 01:09:40.886665] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:23:47.903 [2024-07-25 01:09:40.886741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3817091 ] 00:23:47.903 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.903 [2024-07-25 01:09:40.947628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.903 [2024-07-25 01:09:41.032293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.161 01:09:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:48.161 01:09:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:48.161 01:09:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:48.419 [2024-07-25 01:09:41.361237] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.419 [2024-07-25 01:09:41.361403] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:48.419 TLSTESTn1 00:23:48.419 01:09:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:48.419 Running I/O for 10 seconds... 
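The TLS setup traced above reduces to three steps: write the interchange-format PSK to a file with owner-only permissions, register it with the target, and attach from bdevperf with the same key. Consolidated below using the exact key, flags, and RPC names from the trace, with the long Jenkins workspace paths shortened to a relative spdk checkout:

    # PSK in NVMe TLS interchange format, written 0600 (value from the trace).
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' \
        > test/nvmf/fips/key.txt
    chmod 0600 test/nvmf/fips/key.txt

    # Initiator side: attach a TLS-protected controller through bdevperf's
    # RPC socket, then kick off the 10-second verify workload.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests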
00:24:00.663 00:24:00.663 Latency(us) 00:24:00.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.663 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:00.663 Verification LBA range: start 0x0 length 0x2000 00:24:00.663 TLSTESTn1 : 10.03 2957.09 11.55 0.00 0.00 43206.17 6092.42 64079.64 00:24:00.663 =================================================================================================================== 00:24:00.663 Total : 2957.09 11.55 0.00 0.00 43206.17 6092.42 64079.64 00:24:00.663 0 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:00.663 nvmf_trace.0 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3817091 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3817091 ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3817091 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3817091 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3817091' 00:24:00.663 killing process with pid 3817091 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3817091 00:24:00.663 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.663 00:24:00.663 Latency(us) 00:24:00.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.663 =================================================================================================================== 00:24:00.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.663 [2024-07-25 01:09:51.699566] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3817091 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.663 rmmod nvme_tcp 00:24:00.663 rmmod nvme_fabrics 00:24:00.663 rmmod nvme_keyring 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3817058 ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3817058 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3817058 ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3817058 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3817058 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3817058' 00:24:00.663 killing process with pid 3817058 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3817058 00:24:00.663 [2024-07-25 01:09:51.975416] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:00.663 01:09:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3817058 00:24:00.663 01:09:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.664 01:09:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.664 01:09:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.664 01:09:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.664 01:09:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.664 01:09:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.664 01:09:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.664 01:09:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:01.230 00:24:01.230 real 0m16.487s 00:24:01.230 user 0m20.475s 00:24:01.230 sys 0m6.228s 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.230 ************************************ 00:24:01.230 END TEST nvmf_fips 
00:24:01.230 ************************************ 00:24:01.230 01:09:54 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:01.230 01:09:54 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:01.230 01:09:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:01.230 01:09:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:01.230 01:09:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.230 ************************************ 00:24:01.230 START TEST nvmf_fuzz 00:24:01.230 ************************************ 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:01.230 * Looking for test storage... 00:24:01.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.230 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.488 01:09:54 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.488 01:09:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.388 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:03.389 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:03.389 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:03.389 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:03.389 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.389 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:03.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:24:03.647 00:24:03.647 --- 10.0.0.2 ping statistics --- 00:24:03.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.647 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:24:03.647 00:24:03.647 --- 10.0.0.1 ping statistics --- 00:24:03.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.647 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3820332 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3820332 00:24:03.647 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3820332 ']' 00:24:03.648 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.648 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.648 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
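As in the FIPS run above, nvmf_tcp_init splits the two ice ports across network namespaces so that initiator and target traffic genuinely crosses the fabric rather than looping back. The plumbing, consolidated from the trace (the cvl_0_0/cvl_0_1 interface names belong to this rig; substitute your own NICs):

    ip netns add cvl_0_0_ns_spdk                 # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                           # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1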
00:24:03.648 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.648 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.906 Malloc0 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.906 01:09:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:03.906 01:09:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:35.958 Fuzzing completed. 
Shutting down the fuzz application 00:24:35.958 00:24:35.958 Dumping successful admin opcodes: 00:24:35.958 8, 9, 10, 24, 00:24:35.958 Dumping successful io opcodes: 00:24:35.958 0, 9, 00:24:35.958 NS: 0x200003aeff00 I/O qp, Total commands completed: 461272, total successful commands: 2668, random_seed: 3623249984 00:24:35.958 NS: 0x200003aeff00 admin qp, Total commands completed: 56176, total successful commands: 447, random_seed: 1825547328 00:24:35.958 01:10:27 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:35.958 Fuzzing completed. Shutting down the fuzz application 00:24:35.958 00:24:35.958 Dumping successful admin opcodes: 00:24:35.958 24, 00:24:35.958 Dumping successful io opcodes: 00:24:35.958 00:24:35.958 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1806779530 00:24:35.958 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1806887677 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.958 rmmod nvme_tcp 00:24:35.958 rmmod nvme_fabrics 00:24:35.958 rmmod nvme_keyring 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.958 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3820332 ']' 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3820332 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3820332 ']' 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 3820332 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3820332 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
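The subsystem the fuzzer hammered was assembled with a short RPC sequence before nvme_fuzz was aimed at its listener, as traced above (rpc_cmd in the trace is the test harness's wrapper around scripts/rpc.py). Condensed, with workspace paths shortened:

    # Build the fuzz target: transport, backing bdev, subsystem, namespace,
    # listener - then run nvme_fuzz against the listener for 30 seconds.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a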
00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3820332' 00:24:35.959 killing process with pid 3820332 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 3820332 00:24:35.959 01:10:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 3820332 00:24:36.239 01:10:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:36.239 01:10:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.239 01:10:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.239 01:10:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.239 01:10:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.239 01:10:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.239 01:10:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.239 01:10:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.764 01:10:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:38.764 01:10:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:38.764 00:24:38.764 real 0m37.006s 00:24:38.764 user 0m50.751s 00:24:38.764 sys 0m15.646s 00:24:38.764 01:10:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:38.764 01:10:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:38.764 ************************************ 00:24:38.764 END TEST nvmf_fuzz 00:24:38.764 ************************************ 00:24:38.764 01:10:31 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:38.764 01:10:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:38.764 01:10:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:38.764 01:10:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.764 ************************************ 00:24:38.764 START TEST nvmf_multiconnection 00:24:38.764 ************************************ 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:38.764 * Looking for test storage... 
00:24:38.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:38.764 01:10:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.664 01:10:33 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:40.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:40.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:40.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:40.664 01:10:33 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:40.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.664 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:40.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:40.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms
00:24:40.665
00:24:40.665 --- 10.0.0.2 ping statistics ---
00:24:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:40.665 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:40.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:40.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:24:40.665
00:24:40.665 --- 10.0.0.1 ping statistics ---
00:24:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:40.665 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3826055
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3826055
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 3826055 ']'
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:40.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
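What the trace from nvmf/common.sh@244 through @264 above has built is the single-host loopback topology the rest of this test runs on: one port of the two-port ice NIC (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so the NVMe/TCP traffic actually crosses the hardware. Condensed from the commands traced above:

ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port

The two pings verify reachability in both directions before nvmf_tgt is launched inside the namespace via ip netns exec. The -m 0xF core mask passed to nvmf_tgt is binary 1111, that is, four reactors on cores 0 through 3, which is what the "Reactor started on core N" notices below confirm.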
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable
00:24:40.665 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:40.665 [2024-07-25 01:10:33.655187] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
00:24:40.665 [2024-07-25 01:10:33.655309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:40.665 EAL: No free 2048 kB hugepages reported on node 1
00:24:40.665 [2024-07-25 01:10:33.727655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:40.923 [2024-07-25 01:10:33.820389] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:40.923 [2024-07-25 01:10:33.820442] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:40.923 [2024-07-25 01:10:33.820466] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:40.923 [2024-07-25 01:10:33.820479] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:40.923 [2024-07-25 01:10:33.820491] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:40.923 [2024-07-25 01:10:33.820584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:40.923 [2024-07-25 01:10:33.820639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:40.923 [2024-07-25 01:10:33.820675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:24:40.923 [2024-07-25 01:10:33.820677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:40.923 [2024-07-25 01:10:33.965800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:24:40.923 01:10:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.923 01:10:33
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 Malloc1 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 [2024-07-25 01:10:34.022748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 Malloc2 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:40.923 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 01:10:34 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 Malloc3 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 Malloc4 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.182 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 Malloc5 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 Malloc6 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 Malloc7 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.183 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.441 Malloc8 00:24:41.441 01:10:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 Malloc9 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 Malloc10 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 Malloc11 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
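The loop traced above (target/multiconnection.sh@21-25) is the core of the setup: for each i in 1..11 it creates a 64 MiB malloc bdev with 512-byte blocks, wraps it in subsystem nqn.2016-06.io.spdk:cnode$i with serial number SPDK$i, and exposes it on the shared TCP listener at 10.0.0.2:4420; the repetitive @559/@10 xtrace_disable lines are just rpc_cmd suppressing its own trace output. A standalone sketch of the same sequence using SPDK's scripts/rpc.py (an illustration; the harness's rpc_cmd helper issues these same RPCs against the target's /var/tmp/spdk.sock inside the namespace):

for i in $(seq 1 11); do
    # 64 MiB backing device, 512-byte logical blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    # -a allows any host NQN, -s sets the serial number the initiator will see
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

The serial numbers matter later: waitforserial identifies each newly attached disk by grepping lsblk output for SPDK$i.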
00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.442 01:10:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:42.006 01:10:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:42.006 01:10:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:42.006 01:10:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.007 01:10:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:42.007 01:10:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:44.534 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:44.534 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:44.534 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:44.534 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:44.534 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.534 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:44.534 01:10:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:44.534 01:10:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:44.828 01:10:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:44.828 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:44.828 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:44.828 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:44.828 01:10:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:46.725 01:10:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:46.725 01:10:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:46.725 01:10:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:46.725 01:10:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:46.725 01:10:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.725 
01:10:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:46.725 01:10:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.725 01:10:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:47.658 01:10:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:47.658 01:10:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:47.658 01:10:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.658 01:10:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:47.658 01:10:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:49.556 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:49.556 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:49.556 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:49.556 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:49.556 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.556 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:49.556 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.556 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:50.121 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:50.121 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:50.121 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:50.121 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:50.121 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:52.017 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:52.017 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:52.017 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:52.274 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:52.274 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.274 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:52.274 01:10:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.274 01:10:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:52.840 01:10:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:52.840 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:52.840 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.840 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:52.840 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:55.365 01:10:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:55.365 01:10:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:55.365 01:10:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:55.365 01:10:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:55.365 01:10:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:55.365 01:10:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:55.365 01:10:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.365 01:10:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:55.929 01:10:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:55.929 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:55.929 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.929 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:55.930 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:57.825 01:10:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:57.825 01:10:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:57.825 01:10:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:57.825 01:10:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:57.825 01:10:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:57.825 01:10:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:57.825 01:10:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.826 01:10:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:58.758 01:10:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:58.758 01:10:51 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:58.758 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.758 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:58.758 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:00.659 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:00.659 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:00.659 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:00.659 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:00.659 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.659 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:00.659 01:10:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.659 01:10:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:01.590 01:10:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:01.590 01:10:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:01.590 01:10:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.590 01:10:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:01.590 01:10:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:03.485 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:03.485 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:03.485 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:03.485 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:03.485 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.485 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:03.485 01:10:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.485 01:10:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:04.416 01:10:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:04.416 01:10:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:04.416 01:10:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.416 01:10:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
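Each of the eleven connects in this stretch follows the same pattern: nvme connect with the host NQN/ID and one cnode$i subsystem, then a waitforserial poll that sleeps and rechecks lsblk until a block device whose SERIAL column matches SPDK$i shows up. A self-contained sketch of that polling helper follows; the real one in autotest_common.sh (the @1194-@1204 trace lines above) compares against an expected device count, and this simplified version assumes a single device per serial:

waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do        # up to 16 attempts, roughly 32 s total
        sleep 2
        # the device node appears once the kernel initiator finishes connecting
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            return 0
        fi
    done
    return 1
}

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode9 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
waitforserial SPDK9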
00:25:04.416 01:10:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:06.310 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:06.310 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:06.310 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:06.310 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:06.310 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.310 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:06.310 01:10:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.310 01:10:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:07.276 01:11:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:07.276 01:11:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:07.276 01:11:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.276 01:11:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:07.276 01:11:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:09.171 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:09.171 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:09.171 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:09.429 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:09.429 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.429 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:09.429 01:11:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.429 01:11:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:09.994 01:11:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:09.994 01:11:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:09.994 01:11:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.994 01:11:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:09.994 01:11:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:12.515 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:12.515 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:12.515 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:12.515 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:12.515 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.515 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:12.515 01:11:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:12.515 [global] 00:25:12.515 thread=1 00:25:12.515 invalidate=1 00:25:12.515 rw=read 00:25:12.515 time_based=1 00:25:12.515 runtime=10 00:25:12.515 ioengine=libaio 00:25:12.515 direct=1 00:25:12.515 bs=262144 00:25:12.515 iodepth=64 00:25:12.515 norandommap=1 00:25:12.515 numjobs=1 00:25:12.515 00:25:12.515 [job0] 00:25:12.515 filename=/dev/nvme0n1 00:25:12.515 [job1] 00:25:12.515 filename=/dev/nvme10n1 00:25:12.515 [job2] 00:25:12.515 filename=/dev/nvme1n1 00:25:12.515 [job3] 00:25:12.515 filename=/dev/nvme2n1 00:25:12.515 [job4] 00:25:12.515 filename=/dev/nvme3n1 00:25:12.515 [job5] 00:25:12.515 filename=/dev/nvme4n1 00:25:12.515 [job6] 00:25:12.515 filename=/dev/nvme5n1 00:25:12.515 [job7] 00:25:12.515 filename=/dev/nvme6n1 00:25:12.515 [job8] 00:25:12.515 filename=/dev/nvme7n1 00:25:12.515 [job9] 00:25:12.515 filename=/dev/nvme8n1 00:25:12.515 [job10] 00:25:12.515 filename=/dev/nvme9n1 00:25:12.515 Could not set queue depth (nvme0n1) 00:25:12.515 Could not set queue depth (nvme10n1) 00:25:12.515 Could not set queue depth (nvme1n1) 00:25:12.515 Could not set queue depth (nvme2n1) 00:25:12.515 Could not set queue depth (nvme3n1) 00:25:12.515 Could not set queue depth (nvme4n1) 00:25:12.515 Could not set queue depth (nvme5n1) 00:25:12.515 Could not set queue depth (nvme6n1) 00:25:12.515 Could not set queue depth (nvme7n1) 00:25:12.515 Could not set queue depth (nvme8n1) 00:25:12.515 Could not set queue depth (nvme9n1) 00:25:12.515 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:12.515 fio-3.35 00:25:12.515 Starting 11 threads 00:25:24.714 00:25:24.714 job0: 
(groupid=0, jobs=1): err= 0: pid=3830331: Thu Jul 25 01:11:15 2024 00:25:24.714 read: IOPS=708, BW=177MiB/s (186MB/s)(1791MiB/10117msec) 00:25:24.714 slat (usec): min=9, max=167004, avg=1175.68, stdev=4204.86 00:25:24.714 clat (usec): min=1129, max=247497, avg=89121.05, stdev=44461.91 00:25:24.714 lat (usec): min=1152, max=313464, avg=90296.73, stdev=45011.52 00:25:24.714 clat percentiles (msec): 00:25:24.714 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 29], 20.00th=[ 58], 00:25:24.714 | 30.00th=[ 67], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 92], 00:25:24.714 | 70.00th=[ 114], 80.00th=[ 131], 90.00th=[ 148], 95.00th=[ 163], 00:25:24.714 | 99.00th=[ 209], 99.50th=[ 220], 99.90th=[ 247], 99.95th=[ 247], 00:25:24.714 | 99.99th=[ 249] 00:25:24.714 bw ( KiB/s): min=118272, max=308736, per=10.34%, avg=181718.15, stdev=56569.56, samples=20 00:25:24.714 iops : min= 462, max= 1206, avg=709.80, stdev=220.97, samples=20 00:25:24.714 lat (msec) : 2=0.25%, 4=0.54%, 10=3.00%, 20=3.52%, 50=7.87% 00:25:24.714 lat (msec) : 100=49.48%, 250=35.33% 00:25:24.714 cpu : usr=0.51%, sys=2.14%, ctx=1433, majf=0, minf=3722 00:25:24.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:24.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.714 issued rwts: total=7163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.714 job1: (groupid=0, jobs=1): err= 0: pid=3830332: Thu Jul 25 01:11:15 2024 00:25:24.714 read: IOPS=527, BW=132MiB/s (138MB/s)(1329MiB/10082msec) 00:25:24.714 slat (usec): min=9, max=143583, avg=1267.82, stdev=5610.25 00:25:24.714 clat (msec): min=2, max=347, avg=119.97, stdev=63.88 00:25:24.714 lat (msec): min=2, max=391, avg=121.24, stdev=64.69 00:25:24.714 clat percentiles (msec): 00:25:24.714 | 1.00th=[ 10], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 62], 00:25:24.714 | 30.00th=[ 88], 40.00th=[ 102], 50.00th=[ 123], 60.00th=[ 140], 00:25:24.714 | 70.00th=[ 153], 80.00th=[ 165], 90.00th=[ 194], 95.00th=[ 245], 00:25:24.714 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 313], 00:25:24.714 | 99.99th=[ 347] 00:25:24.714 bw ( KiB/s): min=60928, max=231984, per=7.65%, avg=134469.65, stdev=48430.65, samples=20 00:25:24.714 iops : min= 238, max= 906, avg=525.25, stdev=189.16, samples=20 00:25:24.714 lat (msec) : 4=0.09%, 10=1.11%, 20=4.19%, 50=11.96%, 100=21.82% 00:25:24.714 lat (msec) : 250=56.18%, 500=4.65% 00:25:24.714 cpu : usr=0.32%, sys=1.64%, ctx=1266, majf=0, minf=4097 00:25:24.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:24.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.714 issued rwts: total=5317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.714 job2: (groupid=0, jobs=1): err= 0: pid=3830333: Thu Jul 25 01:11:15 2024 00:25:24.714 read: IOPS=554, BW=139MiB/s (145MB/s)(1398MiB/10081msec) 00:25:24.714 slat (usec): min=9, max=139430, avg=1429.27, stdev=5422.58 00:25:24.714 clat (usec): min=1612, max=306789, avg=113891.88, stdev=70086.84 00:25:24.714 lat (usec): min=1633, max=311407, avg=115321.15, stdev=70810.94 00:25:24.714 clat percentiles (msec): 00:25:24.714 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 32], 20.00th=[ 36], 00:25:24.714 | 30.00th=[ 58], 
40.00th=[ 94], 50.00th=[ 116], 60.00th=[ 136], 00:25:24.714 | 70.00th=[ 153], 80.00th=[ 171], 90.00th=[ 209], 95.00th=[ 245], 00:25:24.714 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 300], 99.95th=[ 305], 00:25:24.714 | 99.99th=[ 309] 00:25:24.714 bw ( KiB/s): min=62976, max=342016, per=8.05%, avg=141497.65, stdev=80977.49, samples=20 00:25:24.714 iops : min= 246, max= 1336, avg=552.70, stdev=316.32, samples=20 00:25:24.714 lat (msec) : 2=0.21%, 4=0.50%, 10=2.15%, 20=3.63%, 50=22.06% 00:25:24.714 lat (msec) : 100=14.38%, 250=52.92%, 500=4.15% 00:25:24.714 cpu : usr=0.25%, sys=1.68%, ctx=1183, majf=0, minf=4097 00:25:24.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:24.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.714 issued rwts: total=5590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.714 job3: (groupid=0, jobs=1): err= 0: pid=3830334: Thu Jul 25 01:11:15 2024 00:25:24.714 read: IOPS=483, BW=121MiB/s (127MB/s)(1223MiB/10118msec) 00:25:24.714 slat (usec): min=9, max=141244, avg=1605.14, stdev=7001.52 00:25:24.714 clat (usec): min=858, max=364130, avg=130649.91, stdev=65026.37 00:25:24.714 lat (usec): min=884, max=364157, avg=132255.05, stdev=66171.63 00:25:24.714 clat percentiles (msec): 00:25:24.714 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 32], 20.00th=[ 73], 00:25:24.714 | 30.00th=[ 111], 40.00th=[ 123], 50.00th=[ 132], 60.00th=[ 146], 00:25:24.714 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 224], 95.00th=[ 245], 00:25:24.714 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 355], 99.95th=[ 359], 00:25:24.714 | 99.99th=[ 363] 00:25:24.714 bw ( KiB/s): min=61440, max=293376, per=7.03%, avg=123586.60, stdev=54590.71, samples=20 00:25:24.714 iops : min= 240, max= 1146, avg=482.75, stdev=213.25, samples=20 00:25:24.714 lat (usec) : 1000=0.02% 00:25:24.714 lat (msec) : 2=0.12%, 4=1.14%, 10=2.41%, 20=3.07%, 50=7.67% 00:25:24.714 lat (msec) : 100=10.92%, 250=70.46%, 500=4.19% 00:25:24.714 cpu : usr=0.19%, sys=1.43%, ctx=1031, majf=0, minf=4097 00:25:24.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:24.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.714 issued rwts: total=4891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.714 job4: (groupid=0, jobs=1): err= 0: pid=3830335: Thu Jul 25 01:11:15 2024 00:25:24.714 read: IOPS=746, BW=187MiB/s (196MB/s)(1883MiB/10082msec) 00:25:24.714 slat (usec): min=9, max=156343, avg=783.94, stdev=5069.96 00:25:24.714 clat (msec): min=5, max=378, avg=84.82, stdev=63.71 00:25:24.714 lat (msec): min=5, max=403, avg=85.60, stdev=64.27 00:25:24.714 clat percentiles (msec): 00:25:24.714 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 31], 00:25:24.714 | 30.00th=[ 34], 40.00th=[ 53], 50.00th=[ 68], 60.00th=[ 80], 00:25:24.714 | 70.00th=[ 106], 80.00th=[ 142], 90.00th=[ 180], 95.00th=[ 213], 00:25:24.714 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 309], 99.95th=[ 359], 00:25:24.714 | 99.99th=[ 380] 00:25:24.714 bw ( KiB/s): min=91136, max=531456, per=10.87%, avg=191143.65, stdev=105015.50, samples=20 00:25:24.714 iops : min= 356, max= 2076, avg=746.65, stdev=410.22, samples=20 00:25:24.714 lat (msec) : 10=1.33%, 
20=6.57%, 50=31.06%, 100=29.44%, 250=29.43% 00:25:24.714 lat (msec) : 500=2.18% 00:25:24.714 cpu : usr=0.41%, sys=2.34%, ctx=1660, majf=0, minf=4097 00:25:24.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:24.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.714 issued rwts: total=7531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.714 job5: (groupid=0, jobs=1): err= 0: pid=3830336: Thu Jul 25 01:11:15 2024 00:25:24.714 read: IOPS=680, BW=170MiB/s (178MB/s)(1714MiB/10078msec) 00:25:24.714 slat (usec): min=13, max=57253, avg=1401.53, stdev=4183.36 00:25:24.714 clat (msec): min=2, max=238, avg=92.62, stdev=48.44 00:25:24.714 lat (msec): min=2, max=238, avg=94.02, stdev=49.16 00:25:24.714 clat percentiles (msec): 00:25:24.714 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 00:25:24.714 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 83], 60.00th=[ 102], 00:25:24.714 | 70.00th=[ 127], 80.00th=[ 144], 90.00th=[ 161], 95.00th=[ 176], 00:25:24.714 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 226], 99.95th=[ 228], 00:25:24.714 | 99.99th=[ 239] 00:25:24.714 bw ( KiB/s): min=85504, max=394752, per=9.89%, avg=173791.05, stdev=84440.28, samples=20 00:25:24.714 iops : min= 334, max= 1542, avg=678.80, stdev=329.86, samples=20 00:25:24.714 lat (msec) : 4=0.07%, 10=0.35%, 20=0.36%, 50=25.17%, 100=33.47% 00:25:24.714 lat (msec) : 250=40.57% 00:25:24.714 cpu : usr=0.40%, sys=2.39%, ctx=1318, majf=0, minf=4097 00:25:24.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:24.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.715 issued rwts: total=6854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.715 job6: (groupid=0, jobs=1): err= 0: pid=3830337: Thu Jul 25 01:11:15 2024 00:25:24.715 read: IOPS=501, BW=125MiB/s (131MB/s)(1269MiB/10119msec) 00:25:24.715 slat (usec): min=10, max=124132, avg=1533.82, stdev=5241.13 00:25:24.715 clat (msec): min=5, max=315, avg=125.95, stdev=51.73 00:25:24.715 lat (msec): min=5, max=430, avg=127.48, stdev=52.43 00:25:24.715 clat percentiles (msec): 00:25:24.715 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 57], 20.00th=[ 90], 00:25:24.715 | 30.00th=[ 101], 40.00th=[ 116], 50.00th=[ 128], 60.00th=[ 138], 00:25:24.715 | 70.00th=[ 148], 80.00th=[ 161], 90.00th=[ 190], 95.00th=[ 215], 00:25:24.715 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 317], 00:25:24.715 | 99.99th=[ 317] 00:25:24.715 bw ( KiB/s): min=88064, max=219648, per=7.30%, avg=128297.05, stdev=32966.69, samples=20 00:25:24.715 iops : min= 344, max= 858, avg=501.15, stdev=128.79, samples=20 00:25:24.715 lat (msec) : 10=0.53%, 20=1.46%, 50=6.38%, 100=21.79%, 250=67.86% 00:25:24.715 lat (msec) : 500=1.97% 00:25:24.715 cpu : usr=0.35%, sys=1.62%, ctx=1231, majf=0, minf=4097 00:25:24.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:24.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.715 issued rwts: total=5075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.715 latency : target=0, window=0, percentile=100.00%, depth=64 
00:25:24.715 job7: (groupid=0, jobs=1): err= 0: pid=3830339: Thu Jul 25 01:11:15 2024 00:25:24.715 read: IOPS=561, BW=140MiB/s (147MB/s)(1420MiB/10115msec) 00:25:24.715 slat (usec): min=10, max=192633, avg=1300.15, stdev=6021.03 00:25:24.715 clat (usec): min=1490, max=447857, avg=112591.29, stdev=64618.18 00:25:24.715 lat (usec): min=1535, max=447876, avg=113891.44, stdev=65660.89 00:25:24.715 clat percentiles (msec): 00:25:24.715 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 43], 00:25:24.715 | 30.00th=[ 80], 40.00th=[ 100], 50.00th=[ 118], 60.00th=[ 129], 00:25:24.715 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 190], 95.00th=[ 236], 00:25:24.715 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 334], 99.95th=[ 334], 00:25:24.715 | 99.99th=[ 447] 00:25:24.715 bw ( KiB/s): min=65536, max=286208, per=8.18%, avg=143725.15, stdev=61961.88, samples=20 00:25:24.715 iops : min= 256, max= 1118, avg=561.40, stdev=242.02, samples=20 00:25:24.715 lat (msec) : 2=0.02%, 4=0.56%, 10=1.92%, 20=3.22%, 50=17.03% 00:25:24.715 lat (msec) : 100=17.59%, 250=55.61%, 500=4.05% 00:25:24.715 cpu : usr=0.34%, sys=1.86%, ctx=1292, majf=0, minf=4097 00:25:24.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:24.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.715 issued rwts: total=5679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.715 job8: (groupid=0, jobs=1): err= 0: pid=3830347: Thu Jul 25 01:11:15 2024 00:25:24.715 read: IOPS=957, BW=239MiB/s (251MB/s)(2397MiB/10010msec) 00:25:24.715 slat (usec): min=9, max=85970, avg=840.78, stdev=3011.81 00:25:24.715 clat (msec): min=3, max=200, avg=65.93, stdev=41.21 00:25:24.715 lat (msec): min=3, max=216, avg=66.77, stdev=41.63 00:25:24.715 clat percentiles (msec): 00:25:24.715 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 33], 00:25:24.715 | 30.00th=[ 37], 40.00th=[ 43], 50.00th=[ 51], 60.00th=[ 61], 00:25:24.715 | 70.00th=[ 75], 80.00th=[ 99], 90.00th=[ 138], 95.00th=[ 153], 00:25:24.715 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 199], 00:25:24.715 | 99.99th=[ 201] 00:25:24.715 bw ( KiB/s): min=102400, max=425472, per=13.15%, avg=231236.00, stdev=110451.01, samples=19 00:25:24.715 iops : min= 400, max= 1662, avg=903.26, stdev=431.45, samples=19 00:25:24.715 lat (msec) : 4=0.02%, 10=0.66%, 20=1.70%, 50=47.74%, 100=30.30% 00:25:24.715 lat (msec) : 250=19.58% 00:25:24.715 cpu : usr=0.42%, sys=2.99%, ctx=1804, majf=0, minf=4097 00:25:24.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:24.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.715 issued rwts: total=9586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.715 job9: (groupid=0, jobs=1): err= 0: pid=3830350: Thu Jul 25 01:11:15 2024 00:25:24.715 read: IOPS=528, BW=132MiB/s (139MB/s)(1337MiB/10115msec) 00:25:24.715 slat (usec): min=9, max=120793, avg=1095.39, stdev=5425.96 00:25:24.715 clat (usec): min=1913, max=427073, avg=119811.85, stdev=67883.26 00:25:24.715 lat (usec): min=1962, max=427141, avg=120907.24, stdev=68696.23 00:25:24.715 clat percentiles (msec): 00:25:24.715 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 34], 20.00th=[ 55], 00:25:24.715 | 
30.00th=[ 74], 40.00th=[ 90], 50.00th=[ 118], 60.00th=[ 144], 00:25:24.715 | 70.00th=[ 159], 80.00th=[ 174], 90.00th=[ 209], 95.00th=[ 251], 00:25:24.715 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 313], 99.95th=[ 372], 00:25:24.715 | 99.99th=[ 426] 00:25:24.715 bw ( KiB/s): min=65536, max=228352, per=7.70%, avg=135309.90, stdev=52952.52, samples=20 00:25:24.715 iops : min= 256, max= 892, avg=528.55, stdev=206.85, samples=20 00:25:24.715 lat (msec) : 2=0.02%, 4=0.13%, 10=1.27%, 20=2.77%, 50=14.25% 00:25:24.715 lat (msec) : 100=25.84%, 250=50.70%, 500=5.03% 00:25:24.715 cpu : usr=0.28%, sys=1.63%, ctx=1378, majf=0, minf=4097 00:25:24.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:24.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.715 issued rwts: total=5349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.715 job10: (groupid=0, jobs=1): err= 0: pid=3830351: Thu Jul 25 01:11:15 2024 00:25:24.715 read: IOPS=637, BW=159MiB/s (167MB/s)(1613MiB/10117msec) 00:25:24.715 slat (usec): min=13, max=35555, avg=1548.52, stdev=4219.25 00:25:24.715 clat (msec): min=22, max=263, avg=98.75, stdev=39.87 00:25:24.715 lat (msec): min=22, max=263, avg=100.30, stdev=40.47 00:25:24.715 clat percentiles (msec): 00:25:24.715 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 64], 00:25:24.715 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 89], 60.00th=[ 105], 00:25:24.715 | 70.00th=[ 123], 80.00th=[ 136], 90.00th=[ 155], 95.00th=[ 169], 00:25:24.715 | 99.00th=[ 207], 99.50th=[ 215], 99.90th=[ 245], 99.95th=[ 245], 00:25:24.715 | 99.99th=[ 264] 00:25:24.715 bw ( KiB/s): min=89088, max=279040, per=9.30%, avg=163440.70, stdev=60029.86, samples=20 00:25:24.715 iops : min= 348, max= 1090, avg=638.40, stdev=234.49, samples=20 00:25:24.715 lat (msec) : 50=6.14%, 100=51.41%, 250=42.43%, 500=0.02% 00:25:24.715 cpu : usr=0.46%, sys=2.09%, ctx=1258, majf=0, minf=4097 00:25:24.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:24.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:24.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:24.715 issued rwts: total=6450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:24.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:24.715 00:25:24.715 Run status group 0 (all jobs): 00:25:24.715 READ: bw=1717MiB/s (1800MB/s), 121MiB/s-239MiB/s (127MB/s-251MB/s), io=17.0GiB (18.2GB), run=10010-10119msec 00:25:24.715 00:25:24.715 Disk stats (read/write): 00:25:24.715 nvme0n1: ios=14112/0, merge=0/0, ticks=1230716/0, in_queue=1230716, util=96.82% 00:25:24.715 nvme10n1: ios=10403/0, merge=0/0, ticks=1231515/0, in_queue=1231515, util=97.07% 00:25:24.715 nvme1n1: ios=10957/0, merge=0/0, ticks=1231989/0, in_queue=1231989, util=97.38% 00:25:24.715 nvme2n1: ios=9581/0, merge=0/0, ticks=1231325/0, in_queue=1231325, util=97.57% 00:25:24.715 nvme3n1: ios=14835/0, merge=0/0, ticks=1235034/0, in_queue=1235034, util=97.67% 00:25:24.715 nvme4n1: ios=13486/0, merge=0/0, ticks=1228019/0, in_queue=1228019, util=98.06% 00:25:24.715 nvme5n1: ios=9921/0, merge=0/0, ticks=1229543/0, in_queue=1229543, util=98.26% 00:25:24.715 nvme6n1: ios=11138/0, merge=0/0, ticks=1228686/0, in_queue=1228686, util=98.42% 00:25:24.715 nvme7n1: ios=18610/0, merge=0/0, ticks=1238317/0, in_queue=1238317, 
util=98.89% 00:25:24.715 nvme8n1: ios=10486/0, merge=0/0, ticks=1233245/0, in_queue=1233245, util=99.08% 00:25:24.715 nvme9n1: ios=12667/0, merge=0/0, ticks=1224421/0, in_queue=1224421, util=99.19% 00:25:24.715 01:11:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:24.715 [global] 00:25:24.715 thread=1 00:25:24.715 invalidate=1 00:25:24.715 rw=randwrite 00:25:24.715 time_based=1 00:25:24.715 runtime=10 00:25:24.715 ioengine=libaio 00:25:24.715 direct=1 00:25:24.715 bs=262144 00:25:24.715 iodepth=64 00:25:24.715 norandommap=1 00:25:24.715 numjobs=1 00:25:24.715 00:25:24.715 [job0] 00:25:24.715 filename=/dev/nvme0n1 00:25:24.715 [job1] 00:25:24.715 filename=/dev/nvme10n1 00:25:24.715 [job2] 00:25:24.715 filename=/dev/nvme1n1 00:25:24.715 [job3] 00:25:24.715 filename=/dev/nvme2n1 00:25:24.715 [job4] 00:25:24.715 filename=/dev/nvme3n1 00:25:24.715 [job5] 00:25:24.715 filename=/dev/nvme4n1 00:25:24.715 [job6] 00:25:24.715 filename=/dev/nvme5n1 00:25:24.715 [job7] 00:25:24.715 filename=/dev/nvme6n1 00:25:24.715 [job8] 00:25:24.715 filename=/dev/nvme7n1 00:25:24.715 [job9] 00:25:24.715 filename=/dev/nvme8n1 00:25:24.715 [job10] 00:25:24.715 filename=/dev/nvme9n1 00:25:24.715 Could not set queue depth (nvme0n1) 00:25:24.715 Could not set queue depth (nvme10n1) 00:25:24.715 Could not set queue depth (nvme1n1) 00:25:24.715 Could not set queue depth (nvme2n1) 00:25:24.716 Could not set queue depth (nvme3n1) 00:25:24.716 Could not set queue depth (nvme4n1) 00:25:24.716 Could not set queue depth (nvme5n1) 00:25:24.716 Could not set queue depth (nvme6n1) 00:25:24.716 Could not set queue depth (nvme7n1) 00:25:24.716 Could not set queue depth (nvme8n1) 00:25:24.716 Could not set queue depth (nvme9n1) 00:25:24.716 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.716 fio-3.35 00:25:24.716 Starting 11 threads 00:25:34.682 00:25:34.682 job0: (groupid=0, jobs=1): err= 0: pid=3831371: Thu Jul 25 01:11:26 2024 00:25:34.682 write: IOPS=456, BW=114MiB/s (120MB/s)(1166MiB/10204msec); 0 zone resets 00:25:34.682 slat (usec): min=20, max=124976, 
avg=1650.02, stdev=4411.41 00:25:34.682 clat (usec): min=1299, max=434365, avg=138292.89, stdev=69568.49 00:25:34.682 lat (usec): min=1340, max=434469, avg=139942.92, stdev=70308.37 00:25:34.682 clat percentiles (msec): 00:25:34.682 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 85], 00:25:34.682 | 30.00th=[ 111], 40.00th=[ 128], 50.00th=[ 136], 60.00th=[ 148], 00:25:34.682 | 70.00th=[ 171], 80.00th=[ 194], 90.00th=[ 220], 95.00th=[ 251], 00:25:34.683 | 99.00th=[ 321], 99.50th=[ 363], 99.90th=[ 426], 99.95th=[ 430], 00:25:34.683 | 99.99th=[ 435] 00:25:34.683 bw ( KiB/s): min=63488, max=190976, per=8.21%, avg=117743.05, stdev=35016.47, samples=20 00:25:34.683 iops : min= 248, max= 746, avg=459.90, stdev=136.73, samples=20 00:25:34.683 lat (msec) : 2=0.09%, 4=0.49%, 10=2.40%, 20=2.94%, 50=6.91% 00:25:34.683 lat (msec) : 100=11.71%, 250=70.28%, 500=5.19% 00:25:34.683 cpu : usr=1.41%, sys=1.72%, ctx=2416, majf=0, minf=1 00:25:34.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:34.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.683 issued rwts: total=0,4663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.683 job1: (groupid=0, jobs=1): err= 0: pid=3831383: Thu Jul 25 01:11:26 2024 00:25:34.683 write: IOPS=635, BW=159MiB/s (167MB/s)(1622MiB/10212msec); 0 zone resets 00:25:34.683 slat (usec): min=15, max=90606, avg=902.18, stdev=2989.34 00:25:34.683 clat (usec): min=1141, max=450328, avg=99785.11, stdev=68880.48 00:25:34.683 lat (usec): min=1197, max=450448, avg=100687.29, stdev=69576.35 00:25:34.683 clat percentiles (msec): 00:25:34.683 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 36], 00:25:34.683 | 30.00th=[ 43], 40.00th=[ 67], 50.00th=[ 100], 60.00th=[ 115], 00:25:34.683 | 70.00th=[ 134], 80.00th=[ 163], 90.00th=[ 180], 95.00th=[ 222], 00:25:34.683 | 99.00th=[ 266], 99.50th=[ 372], 99.90th=[ 439], 99.95th=[ 443], 00:25:34.683 | 99.99th=[ 451] 00:25:34.683 bw ( KiB/s): min=86016, max=405504, per=11.47%, avg=164439.65, stdev=72204.50, samples=20 00:25:34.683 iops : min= 336, max= 1584, avg=642.30, stdev=282.06, samples=20 00:25:34.683 lat (msec) : 2=0.28%, 4=0.72%, 10=2.07%, 20=7.58%, 50=22.81% 00:25:34.683 lat (msec) : 100=16.97%, 250=47.44%, 500=2.13% 00:25:34.683 cpu : usr=1.84%, sys=2.05%, ctx=4203, majf=0, minf=1 00:25:34.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:34.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.683 issued rwts: total=0,6488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.683 job2: (groupid=0, jobs=1): err= 0: pid=3831384: Thu Jul 25 01:11:26 2024 00:25:34.683 write: IOPS=530, BW=133MiB/s (139MB/s)(1336MiB/10077msec); 0 zone resets 00:25:34.683 slat (usec): min=19, max=58371, avg=1471.28, stdev=3502.80 00:25:34.683 clat (usec): min=1202, max=252517, avg=119216.03, stdev=58337.28 00:25:34.683 lat (usec): min=1248, max=256637, avg=120687.31, stdev=59058.80 00:25:34.683 clat percentiles (msec): 00:25:34.683 | 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 42], 20.00th=[ 50], 00:25:34.683 | 30.00th=[ 89], 40.00th=[ 111], 50.00th=[ 124], 60.00th=[ 142], 00:25:34.683 | 70.00th=[ 157], 80.00th=[ 171], 90.00th=[ 190], 
95.00th=[ 218], 00:25:34.683 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 249], 99.95th=[ 251], 00:25:34.683 | 99.99th=[ 253] 00:25:34.683 bw ( KiB/s): min=78336, max=314880, per=9.43%, avg=135128.25, stdev=58869.80, samples=20 00:25:34.683 iops : min= 306, max= 1230, avg=527.80, stdev=229.96, samples=20 00:25:34.683 lat (msec) : 2=0.06%, 4=0.09%, 10=1.20%, 20=2.06%, 50=16.74% 00:25:34.683 lat (msec) : 100=14.17%, 250=65.63%, 500=0.06% 00:25:34.683 cpu : usr=1.72%, sys=1.69%, ctx=2447, majf=0, minf=1 00:25:34.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:34.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.683 issued rwts: total=0,5342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.683 job3: (groupid=0, jobs=1): err= 0: pid=3831385: Thu Jul 25 01:11:26 2024 00:25:34.683 write: IOPS=470, BW=118MiB/s (123MB/s)(1191MiB/10120msec); 0 zone resets 00:25:34.683 slat (usec): min=18, max=29224, avg=1454.51, stdev=3635.90 00:25:34.683 clat (msec): min=3, max=299, avg=134.47, stdev=54.94 00:25:34.683 lat (msec): min=5, max=299, avg=135.92, stdev=55.66 00:25:34.683 clat percentiles (msec): 00:25:34.683 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 51], 20.00th=[ 85], 00:25:34.683 | 30.00th=[ 111], 40.00th=[ 128], 50.00th=[ 136], 60.00th=[ 159], 00:25:34.683 | 70.00th=[ 171], 80.00th=[ 184], 90.00th=[ 199], 95.00th=[ 209], 00:25:34.683 | 99.00th=[ 247], 99.50th=[ 257], 99.90th=[ 292], 99.95th=[ 296], 00:25:34.683 | 99.99th=[ 300] 00:25:34.683 bw ( KiB/s): min=83968, max=179046, per=8.39%, avg=120329.35, stdev=29088.45, samples=20 00:25:34.683 iops : min= 328, max= 699, avg=470.00, stdev=113.61, samples=20 00:25:34.683 lat (msec) : 4=0.02%, 10=0.40%, 20=1.28%, 50=8.36%, 100=14.07% 00:25:34.683 lat (msec) : 250=75.12%, 500=0.76% 00:25:34.683 cpu : usr=1.62%, sys=1.49%, ctx=2691, majf=0, minf=1 00:25:34.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:34.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.683 issued rwts: total=0,4763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.683 job4: (groupid=0, jobs=1): err= 0: pid=3831386: Thu Jul 25 01:11:26 2024 00:25:34.683 write: IOPS=469, BW=117MiB/s (123MB/s)(1191MiB/10144msec); 0 zone resets 00:25:34.683 slat (usec): min=19, max=119520, avg=1384.28, stdev=4375.36 00:25:34.683 clat (usec): min=1056, max=312655, avg=134799.29, stdev=64599.40 00:25:34.683 lat (usec): min=1100, max=312713, avg=136183.57, stdev=65395.36 00:25:34.683 clat percentiles (msec): 00:25:34.683 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 35], 20.00th=[ 71], 00:25:34.683 | 30.00th=[ 104], 40.00th=[ 126], 50.00th=[ 144], 60.00th=[ 161], 00:25:34.683 | 70.00th=[ 174], 80.00th=[ 190], 90.00th=[ 213], 95.00th=[ 239], 00:25:34.683 | 99.00th=[ 275], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 309], 00:25:34.683 | 99.99th=[ 313] 00:25:34.683 bw ( KiB/s): min=67584, max=202240, per=8.39%, avg=120329.85, stdev=35367.92, samples=20 00:25:34.683 iops : min= 264, max= 790, avg=470.00, stdev=138.11, samples=20 00:25:34.683 lat (msec) : 2=0.17%, 4=0.34%, 10=1.28%, 20=3.02%, 50=8.73% 00:25:34.683 lat (msec) : 100=15.53%, 250=68.85%, 500=2.08% 00:25:34.683 cpu : usr=1.17%, 
sys=1.65%, ctx=2834, majf=0, minf=1 00:25:34.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:34.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.683 issued rwts: total=0,4764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.683 job5: (groupid=0, jobs=1): err= 0: pid=3831387: Thu Jul 25 01:11:26 2024 00:25:34.683 write: IOPS=502, BW=126MiB/s (132MB/s)(1264MiB/10059msec); 0 zone resets 00:25:34.683 slat (usec): min=19, max=109048, avg=1570.69, stdev=4166.64 00:25:34.683 clat (usec): min=1398, max=297467, avg=125304.89, stdev=56833.93 00:25:34.683 lat (usec): min=1454, max=310318, avg=126875.58, stdev=57477.03 00:25:34.683 clat percentiles (msec): 00:25:34.683 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 46], 20.00th=[ 77], 00:25:34.683 | 30.00th=[ 95], 40.00th=[ 114], 50.00th=[ 128], 60.00th=[ 136], 00:25:34.683 | 70.00th=[ 155], 80.00th=[ 180], 90.00th=[ 199], 95.00th=[ 215], 00:25:34.683 | 99.00th=[ 259], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 296], 00:25:34.683 | 99.99th=[ 296] 00:25:34.683 bw ( KiB/s): min=77824, max=216064, per=8.92%, avg=127836.85, stdev=34390.94, samples=20 00:25:34.683 iops : min= 304, max= 844, avg=499.35, stdev=134.35, samples=20 00:25:34.683 lat (msec) : 2=0.02%, 4=0.28%, 10=1.29%, 20=2.02%, 50=7.67% 00:25:34.683 lat (msec) : 100=20.68%, 250=66.70%, 500=1.34% 00:25:34.683 cpu : usr=1.49%, sys=1.84%, ctx=2399, majf=0, minf=1 00:25:34.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:34.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.683 issued rwts: total=0,5057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.683 job6: (groupid=0, jobs=1): err= 0: pid=3831388: Thu Jul 25 01:11:26 2024 00:25:34.683 write: IOPS=427, BW=107MiB/s (112MB/s)(1091MiB/10210msec); 0 zone resets 00:25:34.683 slat (usec): min=29, max=143556, avg=2123.08, stdev=5392.00 00:25:34.683 clat (msec): min=5, max=442, avg=147.41, stdev=64.13 00:25:34.683 lat (msec): min=6, max=442, avg=149.54, stdev=64.87 00:25:34.683 clat percentiles (msec): 00:25:34.683 | 1.00th=[ 28], 5.00th=[ 48], 10.00th=[ 68], 20.00th=[ 81], 00:25:34.683 | 30.00th=[ 106], 40.00th=[ 140], 50.00th=[ 155], 60.00th=[ 167], 00:25:34.683 | 70.00th=[ 178], 80.00th=[ 199], 90.00th=[ 226], 95.00th=[ 247], 00:25:34.683 | 99.00th=[ 313], 99.50th=[ 368], 99.90th=[ 430], 99.95th=[ 430], 00:25:34.683 | 99.99th=[ 443] 00:25:34.683 bw ( KiB/s): min=65536, max=219136, per=7.68%, avg=110112.65, stdev=43707.43, samples=20 00:25:34.683 iops : min= 256, max= 856, avg=430.10, stdev=170.68, samples=20 00:25:34.683 lat (msec) : 10=0.07%, 20=0.39%, 50=7.01%, 100=21.21%, 250=67.10% 00:25:34.683 lat (msec) : 500=4.22% 00:25:34.683 cpu : usr=1.69%, sys=1.42%, ctx=1466, majf=0, minf=1 00:25:34.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:34.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.683 issued rwts: total=0,4365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.683 job7: (groupid=0, jobs=1): 
err= 0: pid=3831389: Thu Jul 25 01:11:26 2024 00:25:34.683 write: IOPS=524, BW=131MiB/s (137MB/s)(1338MiB/10210msec); 0 zone resets 00:25:34.683 slat (usec): min=21, max=47090, avg=1417.24, stdev=3545.85 00:25:34.683 clat (msec): min=4, max=456, avg=120.61, stdev=63.89 00:25:34.683 lat (msec): min=5, max=456, avg=122.03, stdev=64.61 00:25:34.683 clat percentiles (msec): 00:25:34.683 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 47], 20.00th=[ 72], 00:25:34.683 | 30.00th=[ 80], 40.00th=[ 92], 50.00th=[ 114], 60.00th=[ 136], 00:25:34.683 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 203], 95.00th=[ 220], 00:25:34.684 | 99.00th=[ 292], 99.50th=[ 351], 99.90th=[ 443], 99.95th=[ 443], 00:25:34.684 | 99.99th=[ 456] 00:25:34.684 bw ( KiB/s): min=73728, max=255488, per=9.44%, avg=135354.00, stdev=50646.56, samples=20 00:25:34.684 iops : min= 288, max= 998, avg=528.70, stdev=197.81, samples=20 00:25:34.684 lat (msec) : 10=2.30%, 20=1.79%, 50=9.05%, 100=30.48%, 250=53.93% 00:25:34.684 lat (msec) : 500=2.45% 00:25:34.684 cpu : usr=1.86%, sys=1.61%, ctx=2646, majf=0, minf=1 00:25:34.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:34.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.684 issued rwts: total=0,5351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.684 job8: (groupid=0, jobs=1): err= 0: pid=3831390: Thu Jul 25 01:11:26 2024 00:25:34.684 write: IOPS=562, BW=141MiB/s (147MB/s)(1435MiB/10209msec); 0 zone resets 00:25:34.684 slat (usec): min=21, max=41165, avg=1147.44, stdev=2959.35 00:25:34.684 clat (usec): min=1083, max=452715, avg=112621.62, stdev=57100.14 00:25:34.684 lat (usec): min=1136, max=452829, avg=113769.07, stdev=57633.73 00:25:34.684 clat percentiles (msec): 00:25:34.684 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 42], 20.00th=[ 58], 00:25:34.684 | 30.00th=[ 86], 40.00th=[ 101], 50.00th=[ 115], 60.00th=[ 123], 00:25:34.684 | 70.00th=[ 136], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 190], 00:25:34.684 | 99.00th=[ 317], 99.50th=[ 347], 99.90th=[ 439], 99.95th=[ 439], 00:25:34.684 | 99.99th=[ 451] 00:25:34.684 bw ( KiB/s): min=78336, max=270336, per=10.14%, avg=145314.80, stdev=42958.67, samples=20 00:25:34.684 iops : min= 306, max= 1056, avg=567.60, stdev=167.79, samples=20 00:25:34.684 lat (msec) : 2=0.10%, 4=0.17%, 10=0.70%, 20=2.13%, 50=9.81% 00:25:34.684 lat (msec) : 100=27.46%, 250=57.72%, 500=1.92% 00:25:34.684 cpu : usr=2.13%, sys=1.85%, ctx=3258, majf=0, minf=1 00:25:34.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:34.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.684 issued rwts: total=0,5740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.684 job9: (groupid=0, jobs=1): err= 0: pid=3831391: Thu Jul 25 01:11:26 2024 00:25:34.684 write: IOPS=474, BW=119MiB/s (124MB/s)(1212MiB/10212msec); 0 zone resets 00:25:34.684 slat (usec): min=29, max=122841, avg=1942.06, stdev=4137.35 00:25:34.684 clat (msec): min=2, max=437, avg=132.80, stdev=51.58 00:25:34.684 lat (msec): min=2, max=437, avg=134.74, stdev=52.10 00:25:34.684 clat percentiles (msec): 00:25:34.684 | 1.00th=[ 27], 5.00th=[ 75], 10.00th=[ 80], 20.00th=[ 83], 00:25:34.684 | 30.00th=[ 95], 
40.00th=[ 112], 50.00th=[ 132], 60.00th=[ 146], 00:25:34.684 | 70.00th=[ 161], 80.00th=[ 171], 90.00th=[ 190], 95.00th=[ 215], 00:25:34.684 | 99.00th=[ 288], 99.50th=[ 347], 99.90th=[ 422], 99.95th=[ 422], 00:25:34.684 | 99.99th=[ 439] 00:25:34.684 bw ( KiB/s): min=81920, max=200192, per=8.54%, avg=122410.90, stdev=37918.92, samples=20 00:25:34.684 iops : min= 320, max= 782, avg=478.15, stdev=148.14, samples=20 00:25:34.684 lat (msec) : 4=0.08%, 10=0.23%, 20=0.25%, 50=1.53%, 100=32.44% 00:25:34.684 lat (msec) : 250=63.50%, 500=1.98% 00:25:34.684 cpu : usr=1.91%, sys=1.42%, ctx=1516, majf=0, minf=1 00:25:34.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:34.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.684 issued rwts: total=0,4846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.684 job10: (groupid=0, jobs=1): err= 0: pid=3831392: Thu Jul 25 01:11:26 2024 00:25:34.684 write: IOPS=573, BW=143MiB/s (150MB/s)(1452MiB/10118msec); 0 zone resets 00:25:34.684 slat (usec): min=22, max=73208, avg=1129.91, stdev=3088.10 00:25:34.684 clat (usec): min=1515, max=265169, avg=110311.49, stdev=51220.80 00:25:34.684 lat (usec): min=1567, max=293543, avg=111441.40, stdev=51765.81 00:25:34.684 clat percentiles (msec): 00:25:34.684 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 37], 20.00th=[ 73], 00:25:34.684 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 114], 60.00th=[ 124], 00:25:34.684 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 171], 95.00th=[ 197], 00:25:34.684 | 99.00th=[ 234], 99.50th=[ 247], 99.90th=[ 257], 99.95th=[ 259], 00:25:34.684 | 99.99th=[ 266] 00:25:34.684 bw ( KiB/s): min=78336, max=264192, per=10.26%, avg=147034.55, stdev=48042.13, samples=20 00:25:34.684 iops : min= 306, max= 1032, avg=574.35, stdev=187.67, samples=20 00:25:34.684 lat (msec) : 2=0.07%, 4=0.17%, 10=1.05%, 20=3.05%, 50=9.80% 00:25:34.684 lat (msec) : 100=28.78%, 250=56.83%, 500=0.26% 00:25:34.684 cpu : usr=1.81%, sys=2.17%, ctx=3362, majf=0, minf=1 00:25:34.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:34.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:34.684 issued rwts: total=0,5807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.684 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:34.684 00:25:34.684 Run status group 0 (all jobs): 00:25:34.684 WRITE: bw=1400MiB/s (1468MB/s), 107MiB/s-159MiB/s (112MB/s-167MB/s), io=14.0GiB (15.0GB), run=10059-10212msec 00:25:34.684 00:25:34.684 Disk stats (read/write): 00:25:34.684 nvme0n1: ios=45/9285, merge=0/0, ticks=1141/1237258, in_queue=1238399, util=100.00% 00:25:34.684 nvme10n1: ios=36/12922, merge=0/0, ticks=244/1243915, in_queue=1244159, util=97.67% 00:25:34.684 nvme1n1: ios=0/10347, merge=0/0, ticks=0/1208999, in_queue=1208999, util=97.34% 00:25:34.684 nvme2n1: ios=0/9328, merge=0/0, ticks=0/1214409, in_queue=1214409, util=97.53% 00:25:34.684 nvme3n1: ios=40/9235, merge=0/0, ticks=1064/1192505, in_queue=1193569, util=99.88% 00:25:34.684 nvme4n1: ios=50/9794, merge=0/0, ticks=1543/1212175, in_queue=1213718, util=100.00% 00:25:34.684 nvme5n1: ios=48/8686, merge=0/0, ticks=3625/1219455, in_queue=1223080, util=100.00% 00:25:34.684 nvme6n1: ios=24/10654, merge=0/0, ticks=283/1237956, in_queue=1238239, 
util=99.88% 00:25:34.684 nvme7n1: ios=0/11437, merge=0/0, ticks=0/1243574, in_queue=1243574, util=98.80% 00:25:34.684 nvme8n1: ios=41/9645, merge=0/0, ticks=1114/1227416, in_queue=1228530, util=99.85% 00:25:34.684 nvme9n1: ios=16/11271, merge=0/0, ticks=407/1222511, in_queue=1222918, util=99.95% 00:25:34.684 01:11:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:34.684 01:11:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:34.684 01:11:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.684 01:11:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:34.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:34.684 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.684 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:34.942 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.942 01:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:35.199 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:35.199 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:35.199 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.199 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.200 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:35.457 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 
00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.457 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:35.715 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.715 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:35.972 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.972 01:11:28 
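The teardown loop traced through this stretch disconnects each subsystem (nvme disconnect -n nqn.2016-06.io.spdk:cnodeN), waits for its block device to vanish via waitforserial_disconnect (common/autotest_common.sh lines 1215-1227 in the xtrace tags), then deletes the subsystem over RPC. A rough bash sketch of the wait helper, reconstructed from the traced commands — the real SPDK source may differ, and the retry bound is an assumption — is:

    # Sketch of waitforserial_disconnect as reconstructed from the xtrace above.
    # $1 = serial that must disappear from lsblk before teardown continues.
    waitforserial_disconnect() {
        local serial=$1 i=0                # traced @1215
        # traced @1216: wait until the tree view no longer lists the serial
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1     # retry bound assumed, not in trace
            sleep 1
        done
        # traced @1223: confirm the flat (-l) listing agrees
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 1
        done
        return 0                           # traced @1227
    }

The grep -q -w match keeps a serial like SPDK1 from matching SPDK10 or SPDK11, which matters here because all eleven namespaces share the SPDK prefix.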
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.972 01:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:35.972 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.972 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:36.230 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.230 01:11:29 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:36.230 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.230 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:36.487 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.487 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:36.488 01:11:29 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:36.488 rmmod nvme_tcp 00:25:36.488 rmmod nvme_fabrics 00:25:36.488 rmmod nvme_keyring 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3826055 ']' 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3826055 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 3826055 ']' 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 3826055 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3826055 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3826055' 00:25:36.488 killing process with pid 3826055 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 3826055 00:25:36.488 01:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 3826055 00:25:37.053 01:11:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.053 01:11:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.053 01:11:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.053 01:11:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.053 01:11:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.053 01:11:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.053 01:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.053 01:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.953 01:11:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.953 00:25:38.953 real 1m0.711s 00:25:38.953 user 3m22.512s 00:25:38.953 sys 0m24.696s 00:25:38.953 01:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:38.953 01:11:32 
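nvmftestfini then unwinds the transport: sync, unload the nvme-tcp/nvme-fabrics modules (retrying while references drain, hence the set +e window), and kill the nvmf_tgt reactor process. A condensed sketch; the netns deletion line is an assumption about what remove_spdk_ns does, inferred from the trace:

sync
set +e                                          # module removal may need retries
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e
kill "$nvmfpid" && wait "$nvmfpid"              # stop the nvmf_tgt reactors
ip netns delete "$NVMF_TARGET_NAMESPACE"        # assumed body of remove_spdk_ns
ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"    # matches the flush of cvl_0_1 above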
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.953 ************************************ 00:25:38.953 END TEST nvmf_multiconnection 00:25:38.953 ************************************ 00:25:38.954 01:11:32 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:38.954 01:11:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:38.954 01:11:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:39.212 01:11:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:39.212 ************************************ 00:25:39.212 START TEST nvmf_initiator_timeout 00:25:39.212 ************************************ 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:39.212 * Looking for test storage... 00:25:39.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.212 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- 
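Note that every re-sourcing of /etc/opt/spdk-pkgdep/paths/export.sh prepends the same toolchain directories again, which is why the PATH echoed above carries several copies of each entry as tests stack up. This is harmless, since lookup stops at the first match; a hypothetical guard (not in the script) would keep it idempotent:

path_prepend() {                                # hypothetical helper, not in export.sh
    case ":$PATH:" in
        *":$1:"*) ;;                            # already present, skip
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
export PATH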
nvmf/common.sh@51 -- # have_pci_nics=0 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:39.213 01:11:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:41.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:41.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.114 
01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:41.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:41.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.114 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.115 01:11:34 
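The scan resolves each matching E810 function (vendor 0x8086, device 0x159b) to its kernel netdev by globbing sysfs; the first hit (cvl_0_0) becomes the target interface and the second (cvl_0_1) the initiator. The pattern, as it appears in the trace:

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the device name
    net_devs+=("${pci_net_devs[@]}")
done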
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.115 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.372 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:25:41.373 00:25:41.373 --- 10.0.0.2 ping statistics --- 00:25:41.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.373 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
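nvmf_tcp_init builds a two-endpoint topology on one host by moving the target port into its own network namespace, so initiator and target traffic crosses a real TCP path. Condensed from the commands above, with the interface names as this run assigned them:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1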
00:25:41.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:25:41.373 00:25:41.373 --- 10.0.0.1 ping statistics --- 00:25:41.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.373 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3834857 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3834857 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 3834857 ']' 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:41.373 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.373 [2024-07-25 01:11:34.463161] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:25:41.373 [2024-07-25 01:11:34.463254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.373 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.631 [2024-07-25 01:11:34.532434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.631 [2024-07-25 01:11:34.625588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
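With connectivity proven in both directions, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until its RPC socket answers. A sketch of the pattern; the rpc.py probe shown is an assumption about how the helper polls, not a copy of it:

ip netns exec "$NVMF_TARGET_NAMESPACE" \
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid"                          # abort if the target died mid-startup
    sleep 0.5
done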
00:25:41.631 [2024-07-25 01:11:34.625638] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.631 [2024-07-25 01:11:34.625653] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.631 [2024-07-25 01:11:34.625667] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.631 [2024-07-25 01:11:34.625678] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.631 [2024-07-25 01:11:34.625949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.631 [2024-07-25 01:11:34.625980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.631 [2024-07-25 01:11:34.626030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.631 [2024-07-25 01:11:34.626032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.631 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.916 Malloc0 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.916 Delay0 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.916 [2024-07-25 01:11:34.812034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:41.916 [2024-07-25 01:11:34.840345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.916 01:11:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:42.482 01:11:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:42.482 01:11:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:42.482 01:11:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.482 01:11:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:42.482 01:11:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3835204 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:44.379 01:11:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:44.379 [global] 00:25:44.379 thread=1 00:25:44.379 invalidate=1 00:25:44.379 rw=write 00:25:44.379 time_based=1 00:25:44.379 runtime=60 00:25:44.379 
ioengine=libaio 00:25:44.379 direct=1 00:25:44.379 bs=4096 00:25:44.379 iodepth=1 00:25:44.379 norandommap=0 00:25:44.379 numjobs=1 00:25:44.379 00:25:44.379 verify_dump=1 00:25:44.379 verify_backlog=512 00:25:44.379 verify_state_save=0 00:25:44.379 do_verify=1 00:25:44.379 verify=crc32c-intel 00:25:44.379 [job0] 00:25:44.379 filename=/dev/nvme0n1 00:25:44.379 Could not set queue depth (nvme0n1) 00:25:44.637 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:44.637 fio-3.35 00:25:44.637 Starting 1 thread 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.915 true 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.915 true 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.915 true 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.915 true 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.915 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.438 true 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.438 true 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.438 
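This is the crux of the test: while fio writes through the Delay0 bdev (created above with 30 us latencies), the latencies are raised to roughly 31 s, past the initiator's default 30 s I/O timeout, held for a few seconds, then dropped back, to verify the host rides out the stall rather than failing I/O. Condensed, with values verbatim from the trace (including the extra zero on p99_write):

for lat in avg_read avg_write p99_read; do
    rpc_cmd bdev_delay_update_latency Delay0 "$lat" 31000000   # microseconds -> 31 s
done
rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000   # 310 s, as logged
sleep 3                                          # let writes pile up on the slow path
for lat in avg_read avg_write p99_read p99_write; do
    rpc_cmd bdev_delay_update_latency Delay0 "$lat" 30          # restore 30 us
done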
01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.438 true 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.438 true 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:50.438 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3835204 00:26:46.672 00:26:46.672 job0: (groupid=0, jobs=1): err= 0: pid=3835358: Thu Jul 25 01:12:37 2024 00:26:46.672 read: IOPS=7, BW=31.1KiB/s (31.9kB/s)(1868KiB/60014msec) 00:26:46.672 slat (usec): min=12, max=7805, avg=42.07, stdev=360.14 00:26:46.672 clat (usec): min=542, max=40961k, avg=128023.57, stdev=1893597.78 00:26:46.672 lat (usec): min=576, max=40961k, avg=128065.64, stdev=1893596.54 00:26:46.672 clat percentiles (usec): 00:26:46.672 | 1.00th=[ 603], 5.00th=[ 40633], 10.00th=[ 41157], 00:26:46.672 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:26:46.672 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:26:46.672 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42206], 00:26:46.672 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:26:46.672 | 99.95th=[17112761], 99.99th=[17112761] 00:26:46.672 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60014msec); 0 zone resets 00:26:46.672 slat (usec): min=10, max=31355, avg=86.44, stdev=1384.66 00:26:46.672 clat (usec): min=227, max=452, avg=305.05, stdev=37.18 00:26:46.672 lat (usec): min=238, max=31717, avg=391.50, stdev=1387.79 00:26:46.672 clat percentiles (usec): 00:26:46.672 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 265], 20.00th=[ 281], 00:26:46.672 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:26:46.672 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 375], 00:26:46.672 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 453], 99.95th=[ 453], 00:26:46.672 | 99.99th=[ 453] 00:26:46.672 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:26:46.672 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:46.672 lat (usec) : 250=3.17%, 500=49.13%, 750=1.02% 00:26:46.672 lat (msec) : 50=46.58%, >=2000=0.10% 00:26:46.672 cpu : usr=0.02%, sys=0.07%, ctx=983, majf=0, minf=2 00:26:46.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.672 issued rwts: total=467,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:46.672 00:26:46.672 Run status group 0 (all jobs): 00:26:46.672 READ: bw=31.1KiB/s (31.9kB/s), 31.1KiB/s-31.1KiB/s (31.9kB/s-31.9kB/s), 
io=1868KiB (1913kB), run=60014-60014msec 00:26:46.672 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60014-60014msec 00:26:46.672 00:26:46.672 Disk stats (read/write): 00:26:46.672 nvme0n1: ios=516/512, merge=0/0, ticks=20073/147, in_queue=20220, util=99.64% 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:46.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:46.672 nvmf hotplug test: fio successful as expected 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.672 rmmod nvme_tcp 00:26:46.672 rmmod nvme_fabrics 00:26:46.672 rmmod nvme_keyring 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3834857 ']' 00:26:46.672 01:12:37 
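The verdict: only 467 reads and 512 writes of 4 KiB completed in the 60 s window (about 31-34 KiB/s), consistent with I/O parked behind the injected multi-second delays, yet fio exits 0 and verification passes. How the script takes that verdict, paraphrased from the initiator_timeout.sh steps visible above:

"$rootdir/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t write -r 60 -v &
fio_pid=$!
sleep 3                                          # latency bump/restore runs meanwhile
fio_status=0
wait "$fio_pid" || fio_status=$?
if [ "$fio_status" -eq 0 ]; then
    echo "nvmf hotplug test: fio successful as expected"
else
    echo "nvmf hotplug test: fio failed, expected success"
fi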
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3834857 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 3834857 ']' 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 3834857 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3834857 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3834857' 00:26:46.672 killing process with pid 3834857 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 3834857 00:26:46.672 01:12:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 3834857 00:26:46.672 01:12:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.672 01:12:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.672 01:12:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.672 01:12:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.672 01:12:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.672 01:12:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.672 01:12:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.672 01:12:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.238 01:12:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.238 00:26:47.238 real 1m8.135s 00:26:47.238 user 4m10.955s 00:26:47.238 sys 0m5.954s 00:26:47.239 01:12:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:47.239 01:12:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:47.239 ************************************ 00:26:47.239 END TEST nvmf_initiator_timeout 00:26:47.239 ************************************ 00:26:47.239 01:12:40 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:47.239 01:12:40 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:47.239 01:12:40 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:47.239 01:12:40 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.239 01:12:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.139 
01:12:42 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:49.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:49.139 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.139 01:12:42 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:49.140 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:49.140 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:49.140 01:12:42 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:49.140 01:12:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:49.140 01:12:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:49.140 01:12:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.398 ************************************ 00:26:49.398 START TEST nvmf_perf_adq 00:26:49.398 ************************************ 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:49.398 * Looking for test storage... 
00:26:49.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.398 01:12:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.399 01:12:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 
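Note: the two matches above are the two ports of a single Intel E810 NIC (vendor 0x8086, device 0x159b, bound to the ice driver), which is why the e810 branch is taken and the mlx5/x722 checks fall through. The harness selects them from a pre-built pci_bus_cache; a stand-alone equivalent (hypothetical one-liner, not part of the harness) would be:

  # List PCI functions whose numeric vendor:device ID matches the E810 part seen above
  lspci -Dn | awk '$3 == "8086:159b" {print $1}'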
00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:51.299 01:12:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:51.864 01:12:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:53.789 01:12:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:59.054 01:12:51 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:59.054 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.054 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.054 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.054 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:59.055 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:59.055 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:59.055 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:59.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.055 01:12:51 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:59.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:26:59.055 00:26:59.055 --- 10.0.0.2 ping statistics --- 00:26:59.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.055 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:59.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:26:59.055 00:26:59.055 --- 10.0.0.1 ping statistics --- 00:26:59.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.055 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.055 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3847477 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3847477 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3847477 ']' 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:59.056 01:12:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.056 [2024-07-25 01:12:51.910727] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
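Note: a condensed recap of the nvmf_tcp_init sequence traced above. The target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace so initiator (10.0.0.1 on the host via cvl_0_1) and target (10.0.0.2 inside the namespace) traffic really crosses between the two E810 ports, and both directions are ping-verified before nvmf_tgt is launched inside the namespace:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start from clean addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # isolate the target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1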
00:26:59.056 [2024-07-25 01:12:51.910807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.056 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.056 [2024-07-25 01:12:51.975951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.056 [2024-07-25 01:12:52.060833] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.056 [2024-07-25 01:12:52.060884] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.056 [2024-07-25 01:12:52.060905] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.056 [2024-07-25 01:12:52.060922] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.056 [2024-07-25 01:12:52.060937] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.056 [2024-07-25 01:12:52.061018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.056 [2024-07-25 01:12:52.061080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.056 [2024-07-25 01:12:52.061144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.056 [2024-07-25 01:12:52.061150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.056 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.314 [2024-07-25 01:12:52.307217] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.314 Malloc1 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.314 [2024-07-25 01:12:52.360442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3847506 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:59.314 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:59.314 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:01.839 "tick_rate": 2700000000, 
00:27:01.839 "poll_groups": [ 00:27:01.839 { 00:27:01.839 "name": "nvmf_tgt_poll_group_000", 00:27:01.839 "admin_qpairs": 1, 00:27:01.839 "io_qpairs": 1, 00:27:01.839 "current_admin_qpairs": 1, 00:27:01.839 "current_io_qpairs": 1, 00:27:01.839 "pending_bdev_io": 0, 00:27:01.839 "completed_nvme_io": 19764, 00:27:01.839 "transports": [ 00:27:01.839 { 00:27:01.839 "trtype": "TCP" 00:27:01.839 } 00:27:01.839 ] 00:27:01.839 }, 00:27:01.839 { 00:27:01.839 "name": "nvmf_tgt_poll_group_001", 00:27:01.839 "admin_qpairs": 0, 00:27:01.839 "io_qpairs": 1, 00:27:01.839 "current_admin_qpairs": 0, 00:27:01.839 "current_io_qpairs": 1, 00:27:01.839 "pending_bdev_io": 0, 00:27:01.839 "completed_nvme_io": 20536, 00:27:01.839 "transports": [ 00:27:01.839 { 00:27:01.839 "trtype": "TCP" 00:27:01.839 } 00:27:01.839 ] 00:27:01.839 }, 00:27:01.839 { 00:27:01.839 "name": "nvmf_tgt_poll_group_002", 00:27:01.839 "admin_qpairs": 0, 00:27:01.839 "io_qpairs": 1, 00:27:01.839 "current_admin_qpairs": 0, 00:27:01.839 "current_io_qpairs": 1, 00:27:01.839 "pending_bdev_io": 0, 00:27:01.839 "completed_nvme_io": 20161, 00:27:01.839 "transports": [ 00:27:01.839 { 00:27:01.839 "trtype": "TCP" 00:27:01.839 } 00:27:01.839 ] 00:27:01.839 }, 00:27:01.839 { 00:27:01.839 "name": "nvmf_tgt_poll_group_003", 00:27:01.839 "admin_qpairs": 0, 00:27:01.839 "io_qpairs": 1, 00:27:01.839 "current_admin_qpairs": 0, 00:27:01.839 "current_io_qpairs": 1, 00:27:01.839 "pending_bdev_io": 0, 00:27:01.839 "completed_nvme_io": 19566, 00:27:01.839 "transports": [ 00:27:01.839 { 00:27:01.839 "trtype": "TCP" 00:27:01.839 } 00:27:01.839 ] 00:27:01.839 } 00:27:01.839 ] 00:27:01.839 }' 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:01.839 01:12:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3847506 00:27:09.943 Initializing NVMe Controllers 00:27:09.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:09.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:09.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:09.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:09.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:09.943 Initialization complete. Launching workers. 
00:27:09.943 ======================================================== 00:27:09.943 Latency(us) 00:27:09.943 Device Information : IOPS MiB/s Average min max 00:27:09.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10251.00 40.04 6243.20 2484.50 9238.23 00:27:09.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10800.70 42.19 5927.00 2760.76 8734.22 00:27:09.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10631.40 41.53 6019.34 2652.47 9516.66 00:27:09.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10448.60 40.81 6126.89 3203.24 7800.46 00:27:09.943 ======================================================== 00:27:09.943 Total : 42131.70 164.58 6076.81 2484.50 9516.66 00:27:09.943 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:09.943 rmmod nvme_tcp 00:27:09.943 rmmod nvme_fabrics 00:27:09.943 rmmod nvme_keyring 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:09.943 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3847477 ']' 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3847477 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3847477 ']' 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3847477 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3847477 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3847477' 00:27:09.944 killing process with pid 3847477 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3847477 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3847477 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.944 01:13:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.843 01:13:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:11.843 01:13:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:11.843 01:13:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:12.776 01:13:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:14.675 01:13:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.943 01:13:12 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.943 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:19.944 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:19.944 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:19.944 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:19.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.944 
01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:19.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:27:19.944 00:27:19.944 --- 10.0.0.2 ping statistics --- 00:27:19.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.944 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:27:19.944 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:27:19.944 00:27:19.944 --- 10.0.0.1 ping statistics --- 00:27:19.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.945 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:19.945 net.core.busy_poll = 1 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:19.945 net.core.busy_read = 1 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3850113 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3850113 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3850113 ']' 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:19.945 01:13:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.945 [2024-07-25 01:13:12.839331] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:19.945 [2024-07-25 01:13:12.839419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.945 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.945 [2024-07-25 01:13:12.903033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:19.945 [2024-07-25 01:13:12.987493] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.945 [2024-07-25 01:13:12.987548] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.945 [2024-07-25 01:13:12.987577] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.945 [2024-07-25 01:13:12.987589] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.945 [2024-07-25 01:13:12.987599] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
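Note: unlike the baseline pass above, this second pass configures ADQ on the target port before restarting nvmf_tgt. A condensed recap of the adq_configure_driver steps traced here (the device commands run via ip netns exec cvl_0_0_ns_spdk in the actual trace):

  ethtool --offload cvl_0_0 hw-tc-offload on                  # let the NIC enforce TCs
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1 net.core.busy_read=1         # enable socket busy polling
  # Two traffic classes: TC0 = 2 default queues at offset 0, TC1 = 2 ADQ queues at offset 2
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into the dedicated hardware channel TC1
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target is then started with --enable-placement-id 1 and --sock-priority 1 (versus 0 in the baseline run), and the nvmf_get_stats check further down confirms the effect: the four io_qpairs land on two poll groups (two groups left idle) instead of one qpair per group as in the first run.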
00:27:19.945 [2024-07-25 01:13:12.987657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.945 [2024-07-25 01:13:12.987718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:19.945 [2024-07-25 01:13:12.987782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:19.945 [2024-07-25 01:13:12.987785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.945 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.203 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.204 [2024-07-25 01:13:13.210150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.204 Malloc1 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.204 01:13:13 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.204 [2024-07-25 01:13:13.263506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3850145 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:20.204 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:20.204 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:22.730 "tick_rate": 2700000000, 00:27:22.730 "poll_groups": [ 00:27:22.730 { 00:27:22.730 "name": "nvmf_tgt_poll_group_000", 00:27:22.730 "admin_qpairs": 1, 00:27:22.730 "io_qpairs": 2, 00:27:22.730 "current_admin_qpairs": 1, 00:27:22.730 "current_io_qpairs": 2, 00:27:22.730 "pending_bdev_io": 0, 00:27:22.730 "completed_nvme_io": 26513, 00:27:22.730 "transports": [ 00:27:22.730 { 00:27:22.730 "trtype": "TCP" 00:27:22.730 } 00:27:22.730 ] 00:27:22.730 }, 00:27:22.730 { 00:27:22.730 "name": "nvmf_tgt_poll_group_001", 00:27:22.730 "admin_qpairs": 0, 00:27:22.730 "io_qpairs": 2, 00:27:22.730 "current_admin_qpairs": 0, 00:27:22.730 "current_io_qpairs": 2, 00:27:22.730 "pending_bdev_io": 0, 00:27:22.730 "completed_nvme_io": 25410, 00:27:22.730 "transports": [ 00:27:22.730 { 00:27:22.730 "trtype": "TCP" 00:27:22.730 } 00:27:22.730 ] 00:27:22.730 }, 00:27:22.730 { 00:27:22.730 "name": "nvmf_tgt_poll_group_002", 00:27:22.730 "admin_qpairs": 0, 00:27:22.730 "io_qpairs": 0, 00:27:22.730 "current_admin_qpairs": 0, 00:27:22.730 "current_io_qpairs": 0, 00:27:22.730 "pending_bdev_io": 0, 00:27:22.730 "completed_nvme_io": 0, 
00:27:22.730 "transports": [ 00:27:22.730 { 00:27:22.730 "trtype": "TCP" 00:27:22.730 } 00:27:22.730 ] 00:27:22.730 }, 00:27:22.730 { 00:27:22.730 "name": "nvmf_tgt_poll_group_003", 00:27:22.730 "admin_qpairs": 0, 00:27:22.730 "io_qpairs": 0, 00:27:22.730 "current_admin_qpairs": 0, 00:27:22.730 "current_io_qpairs": 0, 00:27:22.730 "pending_bdev_io": 0, 00:27:22.730 "completed_nvme_io": 0, 00:27:22.730 "transports": [ 00:27:22.730 { 00:27:22.730 "trtype": "TCP" 00:27:22.730 } 00:27:22.730 ] 00:27:22.730 } 00:27:22.730 ] 00:27:22.730 }' 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:22.730 01:13:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3850145 00:27:30.868 Initializing NVMe Controllers 00:27:30.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:30.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:30.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:30.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:30.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:30.868 Initialization complete. Launching workers. 00:27:30.868 ======================================================== 00:27:30.868 Latency(us) 00:27:30.868 Device Information : IOPS MiB/s Average min max 00:27:30.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6135.40 23.97 10434.87 2274.42 54113.52 00:27:30.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6086.00 23.77 10517.43 1792.83 53341.13 00:27:30.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7188.90 28.08 8906.35 1747.58 53614.76 00:27:30.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7871.10 30.75 8131.46 1696.86 54501.22 00:27:30.868 ======================================================== 00:27:30.868 Total : 27281.39 106.57 9385.94 1696.86 54501.22 00:27:30.868 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.868 rmmod nvme_tcp 00:27:30.868 rmmod nvme_fabrics 00:27:30.868 rmmod nvme_keyring 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3850113 ']' 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3850113 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3850113 ']' 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3850113 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3850113 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3850113' 00:27:30.868 killing process with pid 3850113 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3850113 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3850113 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.868 01:13:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.170 01:13:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.170 01:13:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:34.170 00:27:34.170 real 0m44.521s 00:27:34.170 user 2m36.925s 00:27:34.170 sys 0m10.191s 00:27:34.170 01:13:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:34.170 01:13:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.170 ************************************ 00:27:34.170 END TEST nvmf_perf_adq 00:27:34.170 ************************************ 00:27:34.170 01:13:26 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:34.170 01:13:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:34.170 01:13:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:34.170 01:13:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.170 ************************************ 00:27:34.170 START TEST nvmf_shutdown 00:27:34.170 ************************************ 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:34.170 * Looking for test storage... 
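For reference, the pass/fail gate in the perf_adq run above reduces to counting idle poll groups in nvmf_get_stats: with traffic steered into two hardware queues, two of the four poll groups should carry all I/O qpairs and the other two should stay empty. A standalone sketch of that check (the rpc.py path and the threshold of 2 are assumptions matching this workspace and run):

#!/usr/bin/env bash
# Sketch: fail if ADQ steering leaked qpairs onto more poll groups than expected.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed helper path
expected_idle=2

# Same jq pipeline as the trace: emit one line per poll group with zero
# active I/O qpairs, then count the lines.
idle=$("$rpc" nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)

if [[ $idle -lt $expected_idle ]]; then
	echo "ADQ steering spread I/O across too many poll groups ($idle idle, want >= $expected_idle)"
	exit 1
fi

In the run above the count is 2, so [[ 2 -lt 2 ]] is false and the test proceeds to wait for spdk_nvme_perf and print the latency table before tearing the target down.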
00:27:34.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.170 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:34.171 ************************************ 00:27:34.171 START TEST nvmf_shutdown_tc1 00:27:34.171 ************************************ 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:34.171 01:13:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:34.171 01:13:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:36.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:36.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.072 01:13:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:36.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:36.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.072 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.073 01:13:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:27:36.073 00:27:36.073 --- 10.0.0.2 ping statistics --- 00:27:36.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.073 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:36.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:27:36.073 00:27:36.073 --- 10.0.0.1 ping statistics --- 00:27:36.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.073 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3853428 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3853428 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3853428 ']' 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:36.073 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.073 [2024-07-25 01:13:29.152066] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:27:36.073 [2024-07-25 01:13:29.152134] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.073 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.073 [2024-07-25 01:13:29.217644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.331 [2024-07-25 01:13:29.310096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.331 [2024-07-25 01:13:29.310167] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.331 [2024-07-25 01:13:29.310194] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.331 [2024-07-25 01:13:29.310208] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.331 [2024-07-25 01:13:29.310220] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.331 [2024-07-25 01:13:29.310333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.331 [2024-07-25 01:13:29.310490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.331 [2024-07-25 01:13:29.310541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:36.331 [2024-07-25 01:13:29.310544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.331 [2024-07-25 01:13:29.461889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.331 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.589 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.589 Malloc1 00:27:36.589 [2024-07-25 01:13:29.543135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.589 Malloc2 00:27:36.589 Malloc3 00:27:36.589 Malloc4 00:27:36.589 Malloc5 00:27:36.847 Malloc6 00:27:36.847 Malloc7 00:27:36.847 Malloc8 00:27:36.847 Malloc9 00:27:36.847 Malloc10 00:27:36.847 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.847 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:36.847 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.847 01:13:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3853605 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3853605 /var/tmp/bdevperf.sock 00:27:37.105 01:13:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3853605 ']' 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.105 { 00:27:37.105 "params": { 00:27:37.105 "name": "Nvme$subsystem", 00:27:37.105 "trtype": "$TEST_TRANSPORT", 00:27:37.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.105 "adrfam": "ipv4", 00:27:37.105 "trsvcid": "$NVMF_PORT", 00:27:37.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.105 "hdgst": ${hdgst:-false}, 00:27:37.105 "ddgst": ${ddgst:-false} 00:27:37.105 }, 00:27:37.105 "method": "bdev_nvme_attach_controller" 00:27:37.105 } 00:27:37.105 EOF 00:27:37.105 )") 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:37.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
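For reference, the launch pattern traced here: the ten-controller JSON config is generated on the fly and fed to bdev_svc through process substitution (the /dev/fd/63 seen in the trace), after which the harness polls the RPC socket before sending framework_wait_init. A condensed sketch; the polling loop below is a simplified stand-in for autotest_common.sh's waitforlisten, not its exact implementation:

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# bdev_svc reads its config from the process-substitution fd, so no
# temporary JSON file ever touches disk.
"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
	--json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
perfpid=$!

# Simplified waitforlisten: retry until the app's RPC socket answers requests.
for ((i = 0; i < 100; i++)); do
	"$rootdir/scripts/rpc.py" -t 1 -s /var/tmp/bdevperf.sock rpc_get_methods &> /dev/null && break
	sleep 0.1
done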
00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.105 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.105 { 00:27:37.105 "params": { 00:27:37.105 "name": "Nvme$subsystem", 00:27:37.105 "trtype": "$TEST_TRANSPORT", 00:27:37.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.105 "adrfam": "ipv4", 00:27:37.105 "trsvcid": "$NVMF_PORT", 00:27:37.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.105 "hdgst": ${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.106 { 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme$subsystem", 00:27:37.106 "trtype": "$TEST_TRANSPORT", 00:27:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "$NVMF_PORT", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.106 "hdgst": ${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.106 { 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme$subsystem", 00:27:37.106 "trtype": "$TEST_TRANSPORT", 00:27:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "$NVMF_PORT", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.106 "hdgst": ${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.106 { 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme$subsystem", 00:27:37.106 "trtype": "$TEST_TRANSPORT", 00:27:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "$NVMF_PORT", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:27:37.106 "hdgst": ${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.106 { 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme$subsystem", 00:27:37.106 "trtype": "$TEST_TRANSPORT", 00:27:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "$NVMF_PORT", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.106 "hdgst": ${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.106 { 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme$subsystem", 00:27:37.106 "trtype": "$TEST_TRANSPORT", 00:27:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "$NVMF_PORT", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.106 "hdgst": ${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.106 { 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme$subsystem", 00:27:37.106 "trtype": "$TEST_TRANSPORT", 00:27:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "$NVMF_PORT", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.106 "hdgst": ${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.106 { 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme$subsystem", 00:27:37.106 "trtype": "$TEST_TRANSPORT", 00:27:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "$NVMF_PORT", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.106 "hdgst": 
${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:37.106 { 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme$subsystem", 00:27:37.106 "trtype": "$TEST_TRANSPORT", 00:27:37.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "$NVMF_PORT", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.106 "hdgst": ${hdgst:-false}, 00:27:37.106 "ddgst": ${ddgst:-false} 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 } 00:27:37.106 EOF 00:27:37.106 )") 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:37.106 01:13:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme1", 00:27:37.106 "trtype": "tcp", 00:27:37.106 "traddr": "10.0.0.2", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "4420", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.106 "hdgst": false, 00:27:37.106 "ddgst": false 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 },{ 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme2", 00:27:37.106 "trtype": "tcp", 00:27:37.106 "traddr": "10.0.0.2", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "4420", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:37.106 "hdgst": false, 00:27:37.106 "ddgst": false 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 },{ 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme3", 00:27:37.106 "trtype": "tcp", 00:27:37.106 "traddr": "10.0.0.2", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "4420", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:37.106 "hdgst": false, 00:27:37.106 "ddgst": false 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 },{ 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme4", 00:27:37.106 "trtype": "tcp", 00:27:37.106 "traddr": "10.0.0.2", 00:27:37.106 "adrfam": "ipv4", 00:27:37.106 "trsvcid": "4420", 00:27:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:37.106 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:37.106 "hdgst": false, 00:27:37.106 "ddgst": false 00:27:37.106 }, 00:27:37.106 "method": "bdev_nvme_attach_controller" 00:27:37.106 },{ 00:27:37.106 "params": { 00:27:37.106 "name": "Nvme5", 00:27:37.106 "trtype": "tcp", 00:27:37.106 "traddr": "10.0.0.2", 00:27:37.106 "adrfam": "ipv4", 00:27:37.107 "trsvcid": "4420", 00:27:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:37.107 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:37.107 "hdgst": false, 00:27:37.107 "ddgst": false 00:27:37.107 }, 00:27:37.107 
"method": "bdev_nvme_attach_controller" 00:27:37.107 },{ 00:27:37.107 "params": { 00:27:37.107 "name": "Nvme6", 00:27:37.107 "trtype": "tcp", 00:27:37.107 "traddr": "10.0.0.2", 00:27:37.107 "adrfam": "ipv4", 00:27:37.107 "trsvcid": "4420", 00:27:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:37.107 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:37.107 "hdgst": false, 00:27:37.107 "ddgst": false 00:27:37.107 }, 00:27:37.107 "method": "bdev_nvme_attach_controller" 00:27:37.107 },{ 00:27:37.107 "params": { 00:27:37.107 "name": "Nvme7", 00:27:37.107 "trtype": "tcp", 00:27:37.107 "traddr": "10.0.0.2", 00:27:37.107 "adrfam": "ipv4", 00:27:37.107 "trsvcid": "4420", 00:27:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:37.107 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:37.107 "hdgst": false, 00:27:37.107 "ddgst": false 00:27:37.107 }, 00:27:37.107 "method": "bdev_nvme_attach_controller" 00:27:37.107 },{ 00:27:37.107 "params": { 00:27:37.107 "name": "Nvme8", 00:27:37.107 "trtype": "tcp", 00:27:37.107 "traddr": "10.0.0.2", 00:27:37.107 "adrfam": "ipv4", 00:27:37.107 "trsvcid": "4420", 00:27:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:37.107 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:37.107 "hdgst": false, 00:27:37.107 "ddgst": false 00:27:37.107 }, 00:27:37.107 "method": "bdev_nvme_attach_controller" 00:27:37.107 },{ 00:27:37.107 "params": { 00:27:37.107 "name": "Nvme9", 00:27:37.107 "trtype": "tcp", 00:27:37.107 "traddr": "10.0.0.2", 00:27:37.107 "adrfam": "ipv4", 00:27:37.107 "trsvcid": "4420", 00:27:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:37.107 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:37.107 "hdgst": false, 00:27:37.107 "ddgst": false 00:27:37.107 }, 00:27:37.107 "method": "bdev_nvme_attach_controller" 00:27:37.107 },{ 00:27:37.107 "params": { 00:27:37.107 "name": "Nvme10", 00:27:37.107 "trtype": "tcp", 00:27:37.107 "traddr": "10.0.0.2", 00:27:37.107 "adrfam": "ipv4", 00:27:37.107 "trsvcid": "4420", 00:27:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:37.107 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:37.107 "hdgst": false, 00:27:37.107 "ddgst": false 00:27:37.107 }, 00:27:37.107 "method": "bdev_nvme_attach_controller" 00:27:37.107 }' 00:27:37.107 [2024-07-25 01:13:30.057793] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:27:37.107 [2024-07-25 01:13:30.057867] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:37.107 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.107 [2024-07-25 01:13:30.121328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.107 [2024-07-25 01:13:30.207969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3853605 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:39.005 01:13:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:39.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3853605 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3853428 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.938 { 00:27:39.938 "params": { 00:27:39.938 "name": "Nvme$subsystem", 00:27:39.938 "trtype": "$TEST_TRANSPORT", 00:27:39.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.938 "adrfam": "ipv4", 00:27:39.938 "trsvcid": "$NVMF_PORT", 00:27:39.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.938 "hdgst": ${hdgst:-false}, 00:27:39.938 "ddgst": ${ddgst:-false} 00:27:39.938 }, 00:27:39.938 "method": "bdev_nvme_attach_controller" 00:27:39.938 } 00:27:39.938 EOF 00:27:39.938 )") 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.938 { 00:27:39.938 "params": { 00:27:39.938 "name": "Nvme$subsystem", 00:27:39.938 "trtype": "$TEST_TRANSPORT", 00:27:39.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.938 "adrfam": "ipv4", 00:27:39.938 "trsvcid": "$NVMF_PORT", 00:27:39.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.938 "hdgst": ${hdgst:-false}, 00:27:39.938 "ddgst": ${ddgst:-false} 00:27:39.938 }, 00:27:39.938 "method": "bdev_nvme_attach_controller" 00:27:39.938 } 00:27:39.938 EOF 00:27:39.938 )") 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.938 { 00:27:39.938 "params": { 00:27:39.938 "name": "Nvme$subsystem", 00:27:39.938 "trtype": "$TEST_TRANSPORT", 00:27:39.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.938 "adrfam": "ipv4", 00:27:39.938 "trsvcid": "$NVMF_PORT", 00:27:39.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.938 "hdgst": ${hdgst:-false}, 00:27:39.938 "ddgst": ${ddgst:-false} 00:27:39.938 }, 00:27:39.938 "method": "bdev_nvme_attach_controller" 00:27:39.938 } 00:27:39.938 EOF 00:27:39.938 )") 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.938 { 00:27:39.938 "params": { 00:27:39.938 "name": "Nvme$subsystem", 00:27:39.938 "trtype": "$TEST_TRANSPORT", 00:27:39.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.938 "adrfam": "ipv4", 00:27:39.938 "trsvcid": "$NVMF_PORT", 00:27:39.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.938 "hdgst": ${hdgst:-false}, 00:27:39.938 "ddgst": ${ddgst:-false} 00:27:39.938 }, 00:27:39.938 "method": "bdev_nvme_attach_controller" 00:27:39.938 } 00:27:39.938 EOF 00:27:39.938 )") 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.938 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.938 { 00:27:39.938 "params": { 00:27:39.938 "name": "Nvme$subsystem", 00:27:39.938 "trtype": "$TEST_TRANSPORT", 00:27:39.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.938 "adrfam": "ipv4", 00:27:39.938 "trsvcid": "$NVMF_PORT", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.939 "hdgst": ${hdgst:-false}, 00:27:39.939 "ddgst": ${ddgst:-false} 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 } 00:27:39.939 EOF 00:27:39.939 )") 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:27:39.939 { 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme$subsystem", 00:27:39.939 "trtype": "$TEST_TRANSPORT", 00:27:39.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "$NVMF_PORT", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.939 "hdgst": ${hdgst:-false}, 00:27:39.939 "ddgst": ${ddgst:-false} 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 } 00:27:39.939 EOF 00:27:39.939 )") 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.939 { 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme$subsystem", 00:27:39.939 "trtype": "$TEST_TRANSPORT", 00:27:39.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "$NVMF_PORT", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.939 "hdgst": ${hdgst:-false}, 00:27:39.939 "ddgst": ${ddgst:-false} 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 } 00:27:39.939 EOF 00:27:39.939 )") 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.939 { 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme$subsystem", 00:27:39.939 "trtype": "$TEST_TRANSPORT", 00:27:39.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "$NVMF_PORT", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.939 "hdgst": ${hdgst:-false}, 00:27:39.939 "ddgst": ${ddgst:-false} 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 } 00:27:39.939 EOF 00:27:39.939 )") 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:39.939 { 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme$subsystem", 00:27:39.939 "trtype": "$TEST_TRANSPORT", 00:27:39.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "$NVMF_PORT", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.939 "hdgst": ${hdgst:-false}, 00:27:39.939 "ddgst": ${ddgst:-false} 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 } 00:27:39.939 EOF 00:27:39.939 )") 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:27:39.939 { 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme$subsystem", 00:27:39.939 "trtype": "$TEST_TRANSPORT", 00:27:39.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "$NVMF_PORT", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.939 "hdgst": ${hdgst:-false}, 00:27:39.939 "ddgst": ${ddgst:-false} 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 } 00:27:39.939 EOF 00:27:39.939 )") 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:39.939 01:13:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme1", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme2", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme3", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme4", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme5", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme6", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 
"params": { 00:27:39.939 "name": "Nvme7", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme8", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 "params": { 00:27:39.939 "name": "Nvme9", 00:27:39.939 "trtype": "tcp", 00:27:39.939 "traddr": "10.0.0.2", 00:27:39.939 "adrfam": "ipv4", 00:27:39.939 "trsvcid": "4420", 00:27:39.939 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:39.939 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:39.939 "hdgst": false, 00:27:39.939 "ddgst": false 00:27:39.939 }, 00:27:39.939 "method": "bdev_nvme_attach_controller" 00:27:39.939 },{ 00:27:39.939 "params": { 00:27:39.940 "name": "Nvme10", 00:27:39.940 "trtype": "tcp", 00:27:39.940 "traddr": "10.0.0.2", 00:27:39.940 "adrfam": "ipv4", 00:27:39.940 "trsvcid": "4420", 00:27:39.940 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:39.940 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:39.940 "hdgst": false, 00:27:39.940 "ddgst": false 00:27:39.940 }, 00:27:39.940 "method": "bdev_nvme_attach_controller" 00:27:39.940 }' 00:27:40.197 [2024-07-25 01:13:33.098615] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:40.197 [2024-07-25 01:13:33.098705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854022 ] 00:27:40.197 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.197 [2024-07-25 01:13:33.165802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.197 [2024-07-25 01:13:33.256239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.093 Running I/O for 1 seconds... 
00:27:43.028
00:27:43.028 Latency(us)
00:27:43.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.028 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme1n1 : 1.10 237.13 14.82 0.00 0.00 259874.89 18252.99 268746.15
00:27:43.028 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme2n1 : 1.15 223.30 13.96 0.00 0.00 279223.37 22039.51 257872.02
00:27:43.028 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme3n1 : 1.10 249.01 15.56 0.00 0.00 239212.58 8932.31 246997.90
00:27:43.028 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme4n1 : 1.17 274.53 17.16 0.00 0.00 219683.95 19223.89 239230.67
00:27:43.028 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme5n1 : 1.15 222.81 13.93 0.00 0.00 266073.13 21068.61 254765.13
00:27:43.028 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme6n1 : 1.16 224.34 14.02 0.00 0.00 259648.68 3495.25 253211.69
00:27:43.028 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme7n1 : 1.15 221.88 13.87 0.00 0.00 258345.34 21748.24 259425.47
00:27:43.028 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme8n1 : 1.18 272.16 17.01 0.00 0.00 207485.76 16893.72 260978.92
00:27:43.028 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme9n1 : 1.16 220.22 13.76 0.00 0.00 251596.61 21651.15 274959.93
00:27:43.028 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.028 Verification LBA range: start 0x0 length 0x400
00:27:43.028 Nvme10n1 : 1.17 218.49 13.66 0.00 0.00 249472.95 21554.06 301368.51
00:27:43.028 ===================================================================================================================
00:27:43.028 Total : 2363.87 147.74 0.00 0.00 247351.75 3495.25 301368.51
00:27:43.028 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:27:43.028 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:43.286 rmmod nvme_tcp 00:27:43.286 rmmod nvme_fabrics 00:27:43.286 rmmod nvme_keyring 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3853428 ']' 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3853428 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3853428 ']' 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3853428 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3853428 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3853428' 00:27:43.286 killing process with pid 3853428 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3853428 00:27:43.286 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3853428 00:27:43.852 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:43.852 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:43.852 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:43.853 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.853 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.853 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.853 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.853 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:45.753 00:27:45.753 real 0m11.848s 00:27:45.753 user 0m34.510s 00:27:45.753 sys 0m3.166s 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:45.753 01:13:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:45.753 ************************************ 00:27:45.753 END TEST nvmf_shutdown_tc1 00:27:45.753 ************************************ 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:45.753 ************************************ 00:27:45.753 START TEST nvmf_shutdown_tc2 00:27:45.753 ************************************ 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@296 -- # e810=() 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:45.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.753 
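nvmf/common.sh classifies candidate NICs purely by PCI vendor/device ID: 0x8086:0x159b lands in the e810 array populated above, so both ports of this Intel E810 are taken as the TCP test interfaces. The same match can be reproduced by hand; this spot-check is not part of the harness:

    lspci -d 8086:159b    # lists the devices matched by the e810 entry (0x8086 - 0x159b)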
01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:45.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:45.753 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:45.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:45.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.754 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:27:46.012 00:27:46.012 --- 10.0.0.2 ping statistics --- 00:27:46.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.012 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:46.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:27:46.012 00:27:46.012 --- 10.0.0.1 ping statistics --- 00:27:46.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.012 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:46.012 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.013 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.013 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.013 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.013 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.013 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.013 01:13:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3854790 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3854790 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3854790 ']' 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:46.013 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.013 [2024-07-25 01:13:39.072786] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
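The topology the two pings just verified was built earlier in this trace: the target-side port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, while the initiator side (cvl_0_1) stays in the root namespace as 10.0.0.1/24, with an iptables ACCEPT rule for the NVMe/TCP port. Condensed from the commands shown in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT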
00:27:46.013 [2024-07-25 01:13:39.072877] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.013 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.013 [2024-07-25 01:13:39.143745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.271 [2024-07-25 01:13:39.241569] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.271 [2024-07-25 01:13:39.241647] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.271 [2024-07-25 01:13:39.241663] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.271 [2024-07-25 01:13:39.241676] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.271 [2024-07-25 01:13:39.241687] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.271 [2024-07-25 01:13:39.241770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.271 [2024-07-25 01:13:39.241884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.271 [2024-07-25 01:13:39.241948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:46.271 [2024-07-25 01:13:39.241950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.271 [2024-07-25 01:13:39.397088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.271 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:46.529 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.530 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 Malloc1 00:27:46.530 [2024-07-25 01:13:39.485640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.530 Malloc2 00:27:46.530 Malloc3 00:27:46.530 Malloc4 00:27:46.530 Malloc5 00:27:46.787 Malloc6 00:27:46.787 Malloc7 00:27:46.787 Malloc8 00:27:46.787 Malloc9 00:27:46.787 Malloc10 00:27:46.787 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.787 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:46.787 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:46.787 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.045 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3854968 00:27:47.045 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3854968 /var/tmp/bdevperf.sock 00:27:47.045 
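The "# cat" loop above appends one block of RPC commands per subsystem (1..10) to rpcs.txt, which the shutdown.sh@35 rpc_cmd call then replays against the target in a single batch; the Malloc1..Malloc10 creations and the 10.0.0.2:4420 listener notice are the visible result. The file contents are not echoed in this log, so the following per-subsystem block is only a plausible sketch using the standard SPDK RPC names:

    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420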
01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3854968 ']' 00:27:47.045 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:47.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": 
"Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 
"trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.046 { 00:27:47.046 "params": { 00:27:47.046 "name": "Nvme$subsystem", 00:27:47.046 "trtype": "$TEST_TRANSPORT", 00:27:47.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.046 "adrfam": "ipv4", 00:27:47.046 "trsvcid": "$NVMF_PORT", 00:27:47.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.046 "hdgst": ${hdgst:-false}, 00:27:47.046 "ddgst": ${ddgst:-false} 00:27:47.046 }, 00:27:47.046 "method": "bdev_nvme_attach_controller" 00:27:47.046 } 00:27:47.046 EOF 00:27:47.046 )") 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:47.046 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:47.047 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:47.047 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme1", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme2", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme3", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme4", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme5", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme6", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme7", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme8", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:47.047 "hdgst": false, 
00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme9", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 },{ 00:27:47.047 "params": { 00:27:47.047 "name": "Nvme10", 00:27:47.047 "trtype": "tcp", 00:27:47.047 "traddr": "10.0.0.2", 00:27:47.047 "adrfam": "ipv4", 00:27:47.047 "trsvcid": "4420", 00:27:47.047 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:47.047 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:47.047 "hdgst": false, 00:27:47.047 "ddgst": false 00:27:47.047 }, 00:27:47.047 "method": "bdev_nvme_attach_controller" 00:27:47.047 }' 00:27:47.047 [2024-07-25 01:13:39.987750] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:47.047 [2024-07-25 01:13:39.987842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854968 ] 00:27:47.047 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.047 [2024-07-25 01:13:40.058887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.047 [2024-07-25 01:13:40.145175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.419 Running I/O for 10 seconds... 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- 
# jq -r '.bdevs[0].num_read_ops' 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.984 01:13:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.984 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:48.984 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:48.984 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=135 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3854968 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3854968 ']' 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3854968 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3854968 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3854968' 00:27:49.242 killing process with pid 3854968 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3854968 00:27:49.242 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3854968 00:27:49.500 Received shutdown signal, test time was about 0.884520 seconds 00:27:49.500 00:27:49.500 Latency(us) 00:27:49.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.500 Job: Nvme1n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme1n1 : 0.82 238.35 14.90 0.00 0.00 262707.35 3203.98 242337.56
00:27:49.500 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme2n1 : 0.87 221.60 13.85 0.00 0.00 279051.12 23690.05 274959.93
00:27:49.500 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme3n1 : 0.84 229.49 14.34 0.00 0.00 263082.79 22039.51 274959.93
00:27:49.500 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme4n1 : 0.86 298.26 18.64 0.00 0.00 198034.96 18155.90 250104.79
00:27:49.500 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme5n1 : 0.84 228.11 14.26 0.00 0.00 252508.03 20388.98 253211.69
00:27:49.500 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme6n1 : 0.88 218.61 13.66 0.00 0.00 258529.91 21748.24 257872.02
00:27:49.500 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme7n1 : 0.85 226.24 14.14 0.00 0.00 242847.04 20486.07 257872.02
00:27:49.500 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme8n1 : 0.83 231.30 14.46 0.00 0.00 230441.47 16796.63 236123.78
00:27:49.500 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme9n1 : 0.85 225.08 14.07 0.00 0.00 232512.98 18932.62 256318.58
00:27:49.500 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.500 Verification LBA range: start 0x0 length 0x400
00:27:49.500 Nvme10n1 : 0.88 217.27 13.58 0.00 0.00 235475.25 10048.85 298261.62
00:27:49.500 ===================================================================================================================
00:27:49.500 Total : 2334.32 145.90 0.00 0.00 244025.01 3203.98 298261.62
00:27:49.757 01:13:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3854790 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:50.691
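A quick consistency check on the table above: the I/O size is 65536 bytes (64 KiB), so MiB/s is IOPS divided by 16 (238.35/16 gives roughly 14.90 for Nvme1n1), and the Total row is the column sum: the ten per-device IOPS figures add up to about 2334.32, which is 145.90 MiB/s times 16.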
01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:50.691 rmmod nvme_tcp 00:27:50.691 rmmod nvme_fabrics 00:27:50.691 rmmod nvme_keyring 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3854790 ']' 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3854790 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3854790 ']' 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3854790 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3854790 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3854790' 00:27:50.691 killing process with pid 3854790 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3854790 00:27:50.691 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3854790 00:27:51.309 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.309 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.309 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.309 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.309 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.309 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.309 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.309 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:53.206 00:27:53.206 real 0m7.399s 00:27:53.206 user 0m21.779s 00:27:53.206 sys 0m1.494s 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.206 ************************************ 00:27:53.206 END TEST nvmf_shutdown_tc2 00:27:53.206 ************************************ 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:53.206 ************************************ 00:27:53.206 START TEST nvmf_shutdown_tc3 00:27:53.206 ************************************ 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:53.206 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 
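killprocess, traced twice above (first for the bdevperf pid 3854968, then for the target pid 3854790), guards the kill with a liveness probe and a comm-name check so it never signals a sudo wrapper directly. A simplified sketch of the autotest_common.sh helper; the real one has more branches (FreeBSD, sudo-spawned children):

# Simplified sketch of killprocess from common/autotest_common.sh.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0     # nothing left to kill
    if [ "$(uname)" = Linux ]; then
        # ps -o comm= yields the executable name; refuse a bare sudo wrapper
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                        # reap and tolerate nonzero exit
}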
00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:53.207 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.207 01:13:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:53.207 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:53.207 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:53.207 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.207 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.465 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.465 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.465 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:53.465 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.465 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.465 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.465 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:53.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:53.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:53.465 00:27:53.465 --- 10.0.0.2 ping statistics --- 00:27:53.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.465 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:53.465 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:53.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:27:53.465 00:27:53.465 --- 10.0.0.1 ping statistics --- 00:27:53.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.465 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3855769 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3855769 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3855769 ']' 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
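Condensed from the nvmf_tcp_init and nvmfappstart traces above: one port of the NIC pair (cvl_0_0) is moved into a network namespace and hosts the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and the target app is launched inside the namespace once a ping proves reachability. waitforlisten below is a simplified stand-in for the autotest_common.sh helper:

# Target port in its own namespace, initiator port in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # reachability check

# Launch the target inside the namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1            # app died early
        # rpc_get_methods answers as soon as the RPC server is up
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}
waitforlisten "$nvmfpid"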
00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:53.466 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.466 [2024-07-25 01:13:46.520090] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:53.466 [2024-07-25 01:13:46.520176] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.466 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.466 [2024-07-25 01:13:46.590720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.723 [2024-07-25 01:13:46.681722] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.723 [2024-07-25 01:13:46.681793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.723 [2024-07-25 01:13:46.681806] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.723 [2024-07-25 01:13:46.681817] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.723 [2024-07-25 01:13:46.681826] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.723 [2024-07-25 01:13:46.681937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.723 [2024-07-25 01:13:46.682000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.724 [2024-07-25 01:13:46.682068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:53.724 [2024-07-25 01:13:46.682069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.724 [2024-07-25 01:13:46.838187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:53.724 
01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.724 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.981 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.981 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.981 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:53.981 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:53.981 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:53.981 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.981 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.981 Malloc1 00:27:53.981 [2024-07-25 01:13:46.927483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.981 Malloc2 00:27:53.981 Malloc3 00:27:53.981 Malloc4 00:27:53.981 Malloc5 00:27:54.238 Malloc6 00:27:54.238 Malloc7 00:27:54.238 Malloc8 00:27:54.238 Malloc9 00:27:54.238 Malloc10 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:54.238 01:13:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3855939 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3855939 /var/tmp/bdevperf.sock 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3855939 ']' 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:54.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:54.238 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:54.239 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:54.239 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.239 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.239 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.239 { 00:27:54.239 "params": { 00:27:54.239 "name": "Nvme$subsystem", 00:27:54.239 "trtype": "$TEST_TRANSPORT", 00:27:54.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.239 "adrfam": "ipv4", 00:27:54.239 "trsvcid": "$NVMF_PORT", 00:27:54.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.239 "hdgst": ${hdgst:-false}, 00:27:54.239 "ddgst": ${ddgst:-false} 00:27:54.239 }, 00:27:54.239 "method": "bdev_nvme_attach_controller" 00:27:54.239 } 00:27:54.239 EOF 00:27:54.239 )") 00:27:54.239 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.239 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.239 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.239 { 00:27:54.239 "params": { 00:27:54.239 "name": "Nvme$subsystem", 00:27:54.239 "trtype": "$TEST_TRANSPORT", 00:27:54.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.239 "adrfam": "ipv4", 00:27:54.239 "trsvcid": "$NVMF_PORT", 00:27:54.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.239 "hdgst": ${hdgst:-false}, 00:27:54.239 "ddgst": ${ddgst:-false} 00:27:54.239 }, 00:27:54.239 "method": "bdev_nvme_attach_controller" 00:27:54.239 } 00:27:54.239 EOF 00:27:54.239 )") 00:27:54.239 01:13:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.497 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.497 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.497 { 00:27:54.497 "params": { 00:27:54.497 "name": "Nvme$subsystem", 00:27:54.497 "trtype": "$TEST_TRANSPORT", 00:27:54.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.497 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "$NVMF_PORT", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.498 "hdgst": ${hdgst:-false}, 00:27:54.498 "ddgst": ${ddgst:-false} 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 } 00:27:54.498 EOF 00:27:54.498 )") 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.498 { 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme$subsystem", 00:27:54.498 "trtype": "$TEST_TRANSPORT", 00:27:54.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "$NVMF_PORT", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.498 "hdgst": ${hdgst:-false}, 00:27:54.498 "ddgst": ${ddgst:-false} 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 } 00:27:54.498 EOF 00:27:54.498 )") 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.498 { 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme$subsystem", 00:27:54.498 "trtype": "$TEST_TRANSPORT", 00:27:54.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "$NVMF_PORT", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.498 "hdgst": ${hdgst:-false}, 00:27:54.498 "ddgst": ${ddgst:-false} 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 } 00:27:54.498 EOF 00:27:54.498 )") 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.498 { 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme$subsystem", 00:27:54.498 "trtype": "$TEST_TRANSPORT", 00:27:54.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "$NVMF_PORT", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.498 "hdgst": ${hdgst:-false}, 00:27:54.498 "ddgst": ${ddgst:-false} 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 } 00:27:54.498 EOF 00:27:54.498 )") 00:27:54.498 01:13:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.498 { 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme$subsystem", 00:27:54.498 "trtype": "$TEST_TRANSPORT", 00:27:54.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "$NVMF_PORT", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.498 "hdgst": ${hdgst:-false}, 00:27:54.498 "ddgst": ${ddgst:-false} 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 } 00:27:54.498 EOF 00:27:54.498 )") 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.498 { 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme$subsystem", 00:27:54.498 "trtype": "$TEST_TRANSPORT", 00:27:54.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "$NVMF_PORT", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.498 "hdgst": ${hdgst:-false}, 00:27:54.498 "ddgst": ${ddgst:-false} 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 } 00:27:54.498 EOF 00:27:54.498 )") 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.498 { 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme$subsystem", 00:27:54.498 "trtype": "$TEST_TRANSPORT", 00:27:54.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "$NVMF_PORT", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.498 "hdgst": ${hdgst:-false}, 00:27:54.498 "ddgst": ${ddgst:-false} 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 } 00:27:54.498 EOF 00:27:54.498 )") 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.498 { 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme$subsystem", 00:27:54.498 "trtype": "$TEST_TRANSPORT", 00:27:54.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "$NVMF_PORT", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.498 "hdgst": ${hdgst:-false}, 00:27:54.498 "ddgst": ${ddgst:-false} 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 } 00:27:54.498 EOF 00:27:54.498 )") 00:27:54.498 01:13:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:54.498 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme1", 00:27:54.498 "trtype": "tcp", 00:27:54.498 "traddr": "10.0.0.2", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "4420", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.498 "hdgst": false, 00:27:54.498 "ddgst": false 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 },{ 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme2", 00:27:54.498 "trtype": "tcp", 00:27:54.498 "traddr": "10.0.0.2", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "4420", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:54.498 "hdgst": false, 00:27:54.498 "ddgst": false 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 },{ 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme3", 00:27:54.498 "trtype": "tcp", 00:27:54.498 "traddr": "10.0.0.2", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "4420", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:54.498 "hdgst": false, 00:27:54.498 "ddgst": false 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 },{ 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme4", 00:27:54.498 "trtype": "tcp", 00:27:54.498 "traddr": "10.0.0.2", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "4420", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:54.498 "hdgst": false, 00:27:54.498 "ddgst": false 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 },{ 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme5", 00:27:54.498 "trtype": "tcp", 00:27:54.498 "traddr": "10.0.0.2", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "4420", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:54.498 "hdgst": false, 00:27:54.498 "ddgst": false 00:27:54.498 }, 00:27:54.498 "method": "bdev_nvme_attach_controller" 00:27:54.498 },{ 00:27:54.498 "params": { 00:27:54.498 "name": "Nvme6", 00:27:54.498 "trtype": "tcp", 00:27:54.498 "traddr": "10.0.0.2", 00:27:54.498 "adrfam": "ipv4", 00:27:54.498 "trsvcid": "4420", 00:27:54.498 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:54.498 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:54.498 "hdgst": false, 00:27:54.499 "ddgst": false 00:27:54.499 }, 00:27:54.499 "method": "bdev_nvme_attach_controller" 00:27:54.499 },{ 00:27:54.499 "params": { 00:27:54.499 "name": "Nvme7", 00:27:54.499 "trtype": "tcp", 00:27:54.499 "traddr": "10.0.0.2", 00:27:54.499 "adrfam": "ipv4", 00:27:54.499 "trsvcid": "4420", 00:27:54.499 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:54.499 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:54.499 "hdgst": false, 00:27:54.499 "ddgst": false 00:27:54.499 }, 00:27:54.499 "method": "bdev_nvme_attach_controller" 00:27:54.499 },{ 00:27:54.499 "params": { 00:27:54.499 "name": "Nvme8", 00:27:54.499 "trtype": "tcp", 00:27:54.499 "traddr": "10.0.0.2", 00:27:54.499 "adrfam": "ipv4", 
00:27:54.499 "trsvcid": "4420", 00:27:54.499 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:54.499 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:54.499 "hdgst": false, 00:27:54.499 "ddgst": false 00:27:54.499 }, 00:27:54.499 "method": "bdev_nvme_attach_controller" 00:27:54.499 },{ 00:27:54.499 "params": { 00:27:54.499 "name": "Nvme9", 00:27:54.499 "trtype": "tcp", 00:27:54.499 "traddr": "10.0.0.2", 00:27:54.499 "adrfam": "ipv4", 00:27:54.499 "trsvcid": "4420", 00:27:54.499 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:54.499 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:54.499 "hdgst": false, 00:27:54.499 "ddgst": false 00:27:54.499 }, 00:27:54.499 "method": "bdev_nvme_attach_controller" 00:27:54.499 },{ 00:27:54.499 "params": { 00:27:54.499 "name": "Nvme10", 00:27:54.499 "trtype": "tcp", 00:27:54.499 "traddr": "10.0.0.2", 00:27:54.499 "adrfam": "ipv4", 00:27:54.499 "trsvcid": "4420", 00:27:54.499 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:54.499 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:54.499 "hdgst": false, 00:27:54.499 "ddgst": false 00:27:54.499 }, 00:27:54.499 "method": "bdev_nvme_attach_controller" 00:27:54.499 }' 00:27:54.499 [2024-07-25 01:13:47.424573] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:27:54.499 [2024-07-25 01:13:47.424668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855939 ] 00:27:54.499 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.499 [2024-07-25 01:13:47.490518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.499 [2024-07-25 01:13:47.577167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.397 Running I/O for 10 seconds... 
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:27:56.397 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:27:56.655 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:27:56.913 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:27:56.913 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:56.913 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:56.913 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:56.913 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:56.913 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3855769
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3855769 ']'
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3855769
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:56.913 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3855769
00:27:57.187 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:57.187 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:57.187 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3855769'
00:27:57.187 killing process with pid 3855769
00:27:57.187 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3855769
00:27:57.187 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3855769
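The xtrace above is the whole of waitforio: the test polls bdevperf over its RPC socket until Nvme1n1 has served at least 100 reads (3, then 67, then 131 in this run), sleeping 0.25 s between polls and giving up after ten attempts. Below is a sketch of the function reconstructed from the trace's target/shutdown.sh @50-@69 frames, not copied from the repository, so details are approximate; it assumes the harness's rpc_cmd wrapper (per the common/autotest_common.sh frames) and jq are on PATH.

```bash
# waitforio, reconstructed from the xtrace above (target/shutdown.sh @50-@69).
waitforio() {
	if [ -z "$1" ]; then            # @50: a bdevperf RPC socket is required
		return 1                # error handling not visible in this trace
	fi
	if [ -z "$2" ]; then            # @54: a bdev name is required
		return 1
	fi
	local ret=1                     # @57: assume failure until enough I/O is seen
	local i                         # @58
	for ((i = 10; i != 0; i--)); do # @59: at most ten polls
		# @60: ask bdevperf for the bdev's cumulative read-op count
		read_io_count=$(rpc_cmd -s "$1" bdev_get_iostat -b "$2" \
			| jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then # @63: I/O is flowing
			ret=0                         # @64
			break                         # @65
		else
			sleep 0.25                    # @67: wait and re-poll
		fi
	done
	return $ret                     # @69
}
```

In this run the third poll observes 131 >= 100, so the loop breaks and returns 0, after which the test proceeds to killprocess on the target (pid 3855769).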
00:27:57.187 [2024-07-25 01:13:50.075159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133d560 is same with the state(5) to be set
00:27:57.187 [2024-07-25 01:13:50.077573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0b10 is same with the state(5) to be set
00:27:57.188 [2024-07-25 01:13:50.081298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133dea0 is same with the state(5) to be set
00:27:57.188 [2024-07-25 01:13:50.082462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133e360 is same with the state(5) to be set
00:27:57.189 [2024-07-25 01:13:50.084207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133e800 is same with the state(5) to be set
00:27:57.190 [2024-07-25 01:13:50.086240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133eca0 is same with the state(5) to be set
00:27:57.191 [2024-07-25 01:13:50.088423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f140 is same with the state(5) to be set
00:27:57.192 [2024-07-25 01:13:50.090352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set
*ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 
01:13:50.090812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.090993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.091006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.192 [2024-07-25 01:13:50.091018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same 
with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.091175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133f600 is same with the state(5) to be set 00:27:57.193 [2024-07-25 01:13:50.093516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.093957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.093972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.193 [2024-07-25 01:13:50.094594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.193 [2024-07-25 01:13:50.094610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.094978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.094994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.194 [2024-07-25 01:13:50.095526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.194 [2024-07-25 01:13:50.095585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:57.194 [2024-07-25 01:13:50.095674] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x123a3e0 was disconnected and freed. reset controller. 
00:27:57.194 [2024-07-25 01:13:50.095873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:57.194 [2024-07-25 01:13:50.095905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... command/completion pair repeated for ASYNC EVENT REQUEST cid:1-3 ...]
00:27:57.194 [2024-07-25 01:13:50.096074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1400ec0 is same with the state(5) to be set
[... ASYNC EVENT REQUEST cid:0-3 abort sequence and nvme_tcp.c: 323 recv-state *ERROR* repeated through 01:13:50.097595 for tqpair=0x126cf90, 0x123e300, 0x1269810, 0x123c190, 0x12616b0, 0xd36610, 0x1296f90, 0x13e2f00 and 0x13e1f50 ...]
00:27:57.195 [2024-07-25 01:13:50.100021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.195 [2024-07-25 01:13:50.100071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... command/completion pair repeated through 01:13:50.113802 for WRITE cid:13-47 (lba:26240-30592 len:128) ...]
00:27:57.197 [2024-07-25 01:13:50.113818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.197 [2024-07-25 01:13:50.113832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.197 [2024-07-25 01:13:50.113848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.113862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.113891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.113906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.113921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.113935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.113951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.113965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.113980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.113995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114828] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13171d0 was disconnected and freed. reset controller. 
00:27:57.198 [2024-07-25 01:13:50.114911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.114971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.114988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 
01:13:50.115222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.198 [2024-07-25 01:13:50.115279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.198 [2024-07-25 01:13:50.115302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 
01:13:50.115541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 
01:13:50.115839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.115984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.115998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 
01:13:50.116135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 
01:13:50.116444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.199 [2024-07-25 01:13:50.116473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.199 [2024-07-25 01:13:50.116486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.116501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.116515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.116530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.116544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.116569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.116583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.116599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.116612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.116628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.116642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.116657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.116671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.116686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.116700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.116720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.116734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 
01:13:50.116750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.200 [2024-07-25 01:13:50.116773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.200 [2024-07-25 01:13:50.116791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.200 [2024-07-25 01:13:50.116805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.200 [2024-07-25 01:13:50.116825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.200 [2024-07-25 01:13:50.116858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.200 [2024-07-25 01:13:50.116881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.200 [2024-07-25 01:13:50.116895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.200 [2024-07-25 01:13:50.116983] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1394cd0 was disconnected and freed. reset controller.
00:27:57.200 [2024-07-25 01:13:50.117540] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:57.200 [2024-07-25 01:13:50.117616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e1f50 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1400ec0 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126cf90 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e300 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1269810 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123c190 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12616b0 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36610 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1296f90 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.117951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e2f00 (9): Bad file descriptor
00:27:57.200 [2024-07-25 01:13:50.120931] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:57.200 [2024-07-25 01:13:50.120974] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:57.200 [2024-07-25 01:13:50.121207] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:57.200 [2024-07-25 01:13:50.121291] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:57.200 [2024-07-25 01:13:50.121362] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:57.200 [2024-07-25 01:13:50.121428] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:57.200 [2024-07-25 01:13:50.121494] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:57.200 [2024-07-25 01:13:50.121579] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:57.200 [2024-07-25 01:13:50.121774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.200 [2024-07-25 01:13:50.121811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e1f50 with addr=10.0.0.2, port=4420
00:27:57.200 [2024-07-25 01:13:50.121838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e1f50 is same with the state(5) to be set
00:27:57.200 [2024-07-25 01:13:50.121969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.200 [2024-07-25 01:13:50.121996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123c190 with addr=10.0.0.2, port=4420
00:27:57.200 [2024-07-25 01:13:50.122013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123c190 is same with the state(5) to be set
00:27:57.200 [2024-07-25 01:13:50.122135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.200 [2024-07-25 01:13:50.122160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1269810 with addr=10.0.0.2, port=4420
00:27:57.200 [2024-07-25 01:13:50.122175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269810 is same with the state(5) to be set
00:27:57.200 [2024-07-25 01:13:50.122272] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:57.200 [2024-07-25 01:13:50.122827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.200 [2024-07-25 01:13:50.122851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.200 [2024-07-25 01:13:50.122879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.200 [2024-07-25 01:13:50.122895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.200 [2024-07-25 01:13:50.122912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.200 [2024-07-25 01:13:50.122926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.200 [2024-07-25 01:13:50.122942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.200 [2024-07-25 01:13:50.122956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.122972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.122986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.200 [2024-07-25 01:13:50.123318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.200 [2024-07-25 01:13:50.123343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.201 [2024-07-25 01:13:50.123901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.201 [2024-07-25 01:13:50.123914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:57.201 [2024-07-25 01:13:50.123930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.201 [2024-07-25 01:13:50.123943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 28 identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:35 through cid:62, lba stepping by 128 from 29056 to 32512 ...]
00:27:57.202 [2024-07-25 01:13:50.124808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.202 [2024-07-25 01:13:50.124821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.202 [2024-07-25 01:13:50.124836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13961f0 is same with the state(5) to be set
00:27:57.202 [2024-07-25 01:13:50.124936] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13961f0 was disconnected and freed. reset controller.
00:27:57.202 [2024-07-25 01:13:50.125061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e1f50 (9): Bad file descriptor
00:27:57.202 [2024-07-25 01:13:50.125099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123c190 (9): Bad file descriptor
00:27:57.202 [2024-07-25 01:13:50.125132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1269810 (9): Bad file descriptor
00:27:57.202 [2024-07-25 01:13:50.126374] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:57.202 [2024-07-25 01:13:50.126417] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:27:57.202 [2024-07-25 01:13:50.126434] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:27:57.202 [2024-07-25 01:13:50.126452] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:27:57.202 [2024-07-25 01:13:50.126484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:57.202 [2024-07-25 01:13:50.126505] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:57.202 [2024-07-25 01:13:50.126519] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:57.202 [2024-07-25 01:13:50.126537] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:57.202 [2024-07-25 01:13:50.126550] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:57.202 [2024-07-25 01:13:50.126562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:57.202 [2024-07-25 01:13:50.126664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.202 [2024-07-25 01:13:50.126687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:57.202 [2024-07-25 01:13:50.126701] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
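Every queued command above completes with the same status, ABORTED - SQ DELETION (00/08): status code type 0x00 (generic) with status code 0x08, meaning the command was aborted because its submission queue was deleted when the qpair was torn down. Below is a minimal sketch of how an SPDK consumer could classify that status in its completion callback; the my_* context and hooks are hypothetical stubs, and only the spdk_nvme_* names are taken from SPDK's public headers.

#include "spdk/nvme.h"

/* Hypothetical per-I/O bookkeeping and hooks -- illustrative only. */
struct my_io_ctx { void *buf; uint64_t lba; uint32_t lba_count; };
static void my_retry(struct my_io_ctx *io) { (void)io; /* resubmit on a new qpair */ }
static void my_fail(struct my_io_ctx *io)  { (void)io; /* surface a hard error */ }
static void my_done(struct my_io_ctx *io)  { (void)io; /* normal completion */ }

/* spdk_nvme_cmd_cb-compatible completion callback. */
static void
read_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct my_io_ctx *io = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		my_done(io);
		return;
	}

	/* "(00/08)" in the log is (sct/sc): status code type 0x00 (generic),
	 * status code 0x08 (aborted due to SQ deletion). The queue pair was
	 * deleted underneath the command, so the I/O itself is retryable. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		my_retry(io);
		return;
	}

	my_fail(io);	/* any other error status is treated as fatal here */
}

Since 00/08 says the queue disappeared rather than that the read itself failed, requeueing once a replacement qpair is connected is the natural reaction, and that is what the reset path below attempts.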
00:27:57.202 [2024-07-25 01:13:50.126836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.202 [2024-07-25 01:13:50.126863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126cf90 with addr=10.0.0.2, port=4420
00:27:57.202 [2024-07-25 01:13:50.126884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126cf90 is same with the state(5) to be set
00:27:57.202 [2024-07-25 01:13:50.127199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126cf90 (9): Bad file descriptor
00:27:57.202 [2024-07-25 01:13:50.127275] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:57.202 [2024-07-25 01:13:50.127296] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:57.202 [2024-07-25 01:13:50.127310] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:57.202 [2024-07-25 01:13:50.127371] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
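The reset of cnode4 dies at the very first step: connect() returns errno = 111, which is ECONNREFUSED on Linux, so nothing is listening on 10.0.0.2:4420 anymore and the controller never leaves its error state. A standalone sketch of that same failure mode with plain POSIX sockets follows (address and port are taken from the log; this illustrates the syscall behavior, not SPDK's actual posix_sock_create()).

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),	/* conventional NVMe/TCP port */
	};

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		if (errno == ECONNREFUSED) {	/* errno 111 on Linux */
			fprintf(stderr, "connect() refused: no listener on the target\n");
		} else {
			fprintf(stderr, "connect() failed: %s\n", strerror(errno));
		}
	}
	close(fd);
	return 0;
}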
00:27:57.202 [2024-07-25 01:13:50.127709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.202 [2024-07-25 01:13:50.127733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1 through cid:62, lba stepping by 128 from 16512 to 24320 ...]
00:27:57.204 [2024-07-25 01:13:50.129642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.204 [2024-07-25 01:13:50.129655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.204 [2024-07-25 01:13:50.129669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315ef0 is same with the state(5) to be set
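The repeated "controller reinitialization failed" errors above are reported by spdk_nvme_ctrlr_reconnect_poll_async(), which the bdev_nvme reset path polls after disconnecting a controller. Here is a rough sketch of that disconnect-and-poll sequence, assuming the async reconnect helpers exported by SPDK's public nvme.h; in the real driver the poll happens from the reactor rather than a busy loop.

#include <errno.h>
#include "spdk/nvme.h"

static int
reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc;

	rc = spdk_nvme_ctrlr_disconnect(ctrlr);	/* tear down admin + I/O qpairs */
	if (rc != 0) {
		return rc;			/* e.g. a reset is already in progress */
	}
	spdk_nvme_ctrlr_reconnect_async(ctrlr);	/* start reinitialization */

	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);		/* still connecting */

	/* rc == 0: controller back online. rc < 0: reinitialization failed
	 * and the controller stays in the failed state, which is what
	 * "controller reinitialization failed" in the log reports. */
	return rc;
}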
00:27:57.204 [2024-07-25 01:13:50.130938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.204 [2024-07-25 01:13:50.130962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 identical READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1 through cid:62, lba stepping by 128 from 16512 to 24320; timestamps jump from 01:13:50.132041 to 01:13:50.140520 between cid:35 and cid:36 ...]
00:27:57.205 [2024-07-25 01:13:50.141415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.205 [2024-07-25 01:13:50.141428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:57.205 [2024-07-25 01:13:50.141443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13976f0 is same with the state(5) to be set
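Each drained qpair shows the same workload shape: 64 outstanding READs (cid:0 through cid:63, a full submission-queue window) walking the namespace sequentially, with lba advancing by exactly len = 128 blocks per command, i.e. 64 KiB per read assuming a 512-byte block size. A sketch of a submission loop that produces such a window via the public spdk_nvme_ns_cmd_read() follows; the buffers (e.g. from spdk_dma_zmalloc()) and the completion callback, such as read_complete() above, are assumed to be provided by the caller.

#include "spdk/nvme.h"

/* Submit a window of 64 sequential reads, 128 blocks each, mirroring the
 * cid:0-63 / lba-step-128 pattern in the log. bufs[] must hold 64 DMA-safe
 * buffers of 128 * block_size bytes; ctxs[] holds per-I/O callback args. */
static int
submit_read_window(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		   void **bufs, void **ctxs, uint64_t start_lba,
		   spdk_nvme_cmd_cb cb)
{
	const uint32_t lba_count = 128;
	int rc;

	for (int i = 0; i < 64; i++) {
		rc = spdk_nvme_ns_cmd_read(ns, qpair, bufs[i],
					   start_lba + (uint64_t)i * lba_count,
					   lba_count, cb, ctxs[i], 0);
		if (rc != 0) {
			return rc;	/* e.g. -ENOMEM once the SQ is full */
		}
	}
	return 0;
}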
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.206 [2024-07-25 01:13:50.143905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.206 [2024-07-25 01:13:50.143918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.143934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.143947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.143963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.143976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.143991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.144781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.144802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236630 is same with the state(5) to be set 00:27:57.207 [2024-07-25 01:13:50.146062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.207 [2024-07-25 01:13:50.146365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.207 [2024-07-25 01:13:50.146378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.146975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.146990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.208 [2024-07-25 01:13:50.147563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.208 [2024-07-25 01:13:50.147578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.147975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.147990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.148003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.148025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1237b10 is same with the state(5) to be set 00:27:57.209 [2024-07-25 01:13:50.149291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.149981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.149997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.209 [2024-07-25 01:13:50.150010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.209 [2024-07-25 01:13:50.150025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.210 [2024-07-25 01:13:50.150039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.210 [2024-07-25 01:13:50.150054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.210 [2024-07-25 01:13:50.150067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.210 [2024-07-25 01:13:50.150082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.210 [2024-07-25 01:13:50.150096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.210 [2024-07-25 01:13:50.150111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.210 [2024-07-25 01:13:50.150124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.210 [2024-07-25 01:13:50.150143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.210 [2024-07-25 01:13:50.150157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.210 [2024-07-25 01:13:50.150180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:57.210 [2024-07-25 01:13:50.150201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs elided: sqid:1, cid:30-63, nsid:1, lba:20224-24448, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 ...]
00:27:57.211 [2024-07-25 01:13:50.151228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1239010 is same with the state(5) to be set
00:27:57.211 [2024-07-25 01:13:50.152885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs elided: sqid:1, cid:0-63, nsid:1, lba:16384-24448, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 ...]
00:27:57.212 [2024-07-25 01:13:50.154872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e8b0 is same with the state(5) to be set
00:27:57.212 [2024-07-25 01:13:50.156458] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:57.212 [2024-07-25 01:13:50.156491] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
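(Editor's note: in the aborted completions above, "(00/08)" is the NVMe status printed as SCT/SC: status code type 0x0, Generic Command Status, with status code 0x08, Command Aborted due to SQ Deletion. Every READ still queued on qid:1 was aborted when its submission queue was torn down during the controller reset. A minimal bash sketch of that decoding; the helper name is hypothetical and only a few generic status codes are mapped:)

    #!/usr/bin/env bash
    # Hypothetical decoder for the "(SCT/SC)" pair that spdk_nvme_print_completion logs.
    # Strings follow the NVMe Generic Command Status values; only a few are shown.
    decode_nvme_status() {
      local sct=$1 sc=$2
      case "${sct}/${sc}" in
        00/00) echo "SUCCESS" ;;
        00/07) echo "ABORTED - BY REQUEST" ;;   # Command Abort Requested
        00/08) echo "ABORTED - SQ DELETION" ;;  # Command Aborted due to SQ Deletion
        *)     echo "unmapped status sct=${sct} sc=${sc}" ;;
      esac
    }
    decode_nvme_status 00 08   # prints: ABORTED - SQ DELETION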
00:27:57.212 [2024-07-25 01:13:50.156511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:57.212 [2024-07-25 01:13:50.156529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:57.212 [2024-07-25 01:13:50.156660] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:57.212 [2024-07-25 01:13:50.156688] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:57.212 [2024-07-25 01:13:50.156788] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:57.212 task offset: 19456 on job bdev=Nvme9n1 fails
00:27:57.212 Latency(us)
Device Information                                                        : runtime(s)    IOPS     MiB/s    Fail/s     TO/s     Average        min        max
Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme1n1 ended in about 0.93 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme1n1             :       0.93     137.62      8.60     68.81     0.00   306632.63   28156.21   262532.36
Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme2n1 ended in about 0.92 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme2n1             :       0.92     209.03     13.06     69.68     0.00   222452.81   16699.54   259425.47
Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme3n1 ended in about 0.92 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme3n1             :       0.92     208.79     13.05     69.60     0.00   218177.99   26408.58   278066.82
Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme4n1 ended in about 0.93 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme4n1             :       0.93     207.42     12.96     69.14     0.00   215080.39   20583.16   237677.23
Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme5n1 ended in about 0.94 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme5n1             :       0.94     135.89      8.49     67.95     0.00   286148.33   24272.59   287387.50
Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme6n1 ended in about 0.95 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme6n1             :       0.95     135.42      8.46     67.71     0.00   281335.02   38641.97   257872.02
Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme7n1 ended in about 0.95 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme7n1             :       0.95     134.96      8.43     67.48     0.00   276511.86   21359.88   306028.85
Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme8n1 ended in about 0.95 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme8n1             :       0.95     134.50      8.41     67.25     0.00   271614.36   21359.88   264085.81
Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme9n1 ended in about 0.90 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme9n1             :       0.90     142.35      8.90     71.18     0.00   248166.34    5534.15   309135.74
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
Job: Nvme10n1 ended in about 0.96 seconds with error
Verification LBA range: start 0x0 length 0x400
     Nvme10n1            :       0.96     134.00      8.37     67.00     0.00   260894.72   19903.53   262532.36
===================================================================================================================
     Total               :               1579.97     98.75    685.78     0.00   255053.17    5534.15   309135.74
00:27:57.213 [2024-07-25 01:13:50.184779] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:57.213 [2024-07-25 01:13:50.184873] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:57.213 [2024-07-25 01:13:50.185232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.213 [2024-07-25 01:13:50.185276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e300 with addr=10.0.0.2, port=4420
00:27:57.213 [2024-07-25 01:13:50.185298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e300 is same with the state(5) to be set
00:27:57.213 [2024-07-25 01:13:50.185430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.213 [2024-07-25 01:13:50.185458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12616b0 with addr=10.0.0.2, port=4420
00:27:57.213 [2024-07-25 01:13:50.185474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12616b0 is same with the state(5) to be set
00:27:57.213 [2024-07-25 01:13:50.185596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.213 [2024-07-25 01:13:50.185624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1400ec0 with addr=10.0.0.2, port=4420
00:27:57.213 [2024-07-25 01:13:50.185640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1400ec0 is same with the state(5) to be set
00:27:57.213 [2024-07-25 01:13:50.185751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.213 [2024-07-25 01:13:50.185777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd36610 with addr=10.0.0.2, port=4420
00:27:57.213 [2024-07-25 01:13:50.185793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd36610 is same with the state(5) to be set
00:27:57.213 [2024-07-25 01:13:50.187423] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:57.213 [2024-07-25 01:13:50.187453] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:57.213 [2024-07-25 01:13:50.187472] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:57.213 [2024-07-25 01:13:50.187490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:57.213 [2024-07-25 01:13:50.187672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:57.213 [2024-07-25 01:13:50.187700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e2f00 with addr=10.0.0.2, port=4420
00:27:57.213 [2024-07-25 01:13:50.187716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e2f00 is same
with the state(5) to be set 00:27:57.213 [2024-07-25 01:13:50.187830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.213 [2024-07-25 01:13:50.187857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1296f90 with addr=10.0.0.2, port=4420 00:27:57.213 [2024-07-25 01:13:50.187873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1296f90 is same with the state(5) to be set 00:27:57.213 [2024-07-25 01:13:50.187899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e300 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.187921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12616b0 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.187939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1400ec0 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.187980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36610 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.188043] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:57.213 [2024-07-25 01:13:50.188068] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:57.213 [2024-07-25 01:13:50.188087] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:57.213 [2024-07-25 01:13:50.188105] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:57.213 [2024-07-25 01:13:50.188298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.213 [2024-07-25 01:13:50.188327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1269810 with addr=10.0.0.2, port=4420 00:27:57.213 [2024-07-25 01:13:50.188344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1269810 is same with the state(5) to be set 00:27:57.213 [2024-07-25 01:13:50.188449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.213 [2024-07-25 01:13:50.188476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123c190 with addr=10.0.0.2, port=4420 00:27:57.213 [2024-07-25 01:13:50.188492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123c190 is same with the state(5) to be set 00:27:57.213 [2024-07-25 01:13:50.188604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.213 [2024-07-25 01:13:50.188630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e1f50 with addr=10.0.0.2, port=4420 00:27:57.213 [2024-07-25 01:13:50.188646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e1f50 is same with the state(5) to be set 00:27:57.213 [2024-07-25 01:13:50.188750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.213 [2024-07-25 01:13:50.188776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126cf90 with addr=10.0.0.2, port=4420 00:27:57.213 [2024-07-25 01:13:50.188792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126cf90 is same with the state(5) to be set 00:27:57.213 [2024-07-25 
01:13:50.188811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e2f00 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.188829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1296f90 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.188846] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.213 [2024-07-25 01:13:50.188858] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.213 [2024-07-25 01:13:50.188874] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.213 [2024-07-25 01:13:50.188894] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:57.213 [2024-07-25 01:13:50.188908] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:57.213 [2024-07-25 01:13:50.188921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:57.213 [2024-07-25 01:13:50.188938] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:57.213 [2024-07-25 01:13:50.188952] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:57.213 [2024-07-25 01:13:50.188965] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:57.213 [2024-07-25 01:13:50.188981] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:57.213 [2024-07-25 01:13:50.188999] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:57.213 [2024-07-25 01:13:50.189013] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:57.213 [2024-07-25 01:13:50.189118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.213 [2024-07-25 01:13:50.189141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.213 [2024-07-25 01:13:50.189153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.213 [2024-07-25 01:13:50.189165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
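(Editor's note: the bdevperf table above is internally consistent: every job runs 65536-byte IOs, so MiB/s should equal IOPS * 65536 / 1048576, that is IOPS / 16. A quick check against three of the rows:)

    # Cross-check IOPS vs MiB/s in the table above (IO size 65536 bytes).
    # 137.62 IOPS -> 8.60 MiB/s, 209.03 -> 13.06, 134.00 -> 8.37
    for iops in 137.62 209.03 134.00; do
      echo "scale=2; $iops * 65536 / 1048576" | bc
    done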
00:27:57.213 [2024-07-25 01:13:50.189182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1269810 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.189201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123c190 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.189219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e1f50 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.189236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126cf90 (9): Bad file descriptor 00:27:57.213 [2024-07-25 01:13:50.189259] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:57.213 [2024-07-25 01:13:50.189283] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:57.213 [2024-07-25 01:13:50.189296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:57.213 [2024-07-25 01:13:50.189312] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:57.213 [2024-07-25 01:13:50.189325] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:57.213 [2024-07-25 01:13:50.189338] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:57.213 [2024-07-25 01:13:50.189375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.213 [2024-07-25 01:13:50.189392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.214 [2024-07-25 01:13:50.189404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:57.214 [2024-07-25 01:13:50.189417] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:57.214 [2024-07-25 01:13:50.189430] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:57.214 [2024-07-25 01:13:50.189446] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:57.214 [2024-07-25 01:13:50.189460] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:57.214 [2024-07-25 01:13:50.189473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:57.214 [2024-07-25 01:13:50.189487] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:57.214 [2024-07-25 01:13:50.189501] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:57.214 [2024-07-25 01:13:50.189513] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
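(Editor's note: the repeated "connect() failed, errno = 111" lines are ECONNREFUSED: while the target is shutting down, nothing is listening on 10.0.0.2:4420, so each reconnect attempt from the reset path is refused and the controllers end up in the failed state logged above. The errno is easy to reproduce with bash's /dev/tcp against any port that has no listener; the address below is only an example:)

    # errno 111 (ECONNREFUSED): a TCP connect to a port nobody is listening on.
    # 127.0.0.1:4420 is just an example port with no listener bound.
    if ! timeout 1 bash -c ': </dev/tcp/127.0.0.1/4420' 2>/dev/null; then
      echo "connect() failed: ECONNREFUSED (errno 111)"
    fi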
00:27:57.214 [2024-07-25 01:13:50.189528] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:57.214 [2024-07-25 01:13:50.189541] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:57.214 [2024-07-25 01:13:50.189554] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:57.214 [2024-07-25 01:13:50.189592] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.214 [2024-07-25 01:13:50.189614] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.214 [2024-07-25 01:13:50.189628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.214 [2024-07-25 01:13:50.189640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.780 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:57.780 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3855939 00:27:58.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3855939) - No such process 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.714 rmmod nvme_tcp 00:27:58.714 rmmod nvme_fabrics 00:27:58.714 rmmod nvme_keyring 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.714 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.614 01:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.614 00:28:00.614 real 0m7.446s 00:28:00.614 user 0m18.000s 00:28:00.614 sys 0m1.501s 00:28:00.614 01:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:00.614 01:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.614 ************************************ 00:28:00.614 END TEST nvmf_shutdown_tc3 00:28:00.614 ************************************ 00:28:00.873 01:13:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:00.873 00:28:00.873 real 0m26.907s 00:28:00.873 user 1m14.366s 00:28:00.873 sys 0m6.311s 00:28:00.873 01:13:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:00.873 01:13:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:00.873 ************************************ 00:28:00.873 END TEST nvmf_shutdown 00:28:00.873 ************************************ 00:28:00.873 01:13:53 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:00.873 01:13:53 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.873 01:13:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.873 01:13:53 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:00.873 01:13:53 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:00.873 01:13:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.873 01:13:53 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:00.873 01:13:53 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:00.873 01:13:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:00.873 01:13:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:00.873 01:13:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.873 ************************************ 00:28:00.873 START TEST nvmf_multicontroller 00:28:00.873 ************************************ 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:00.873 * Looking for test storage... 
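(Editor's note: trace lines such as `[[ tcp == \t\c\p ]]` above are how bash xtrace renders ordinary `[[ ... == "..." ]]` tests: the right-hand side of `[[ == ]]` is treated as a glob pattern unless quoted, and xtrace backslash-escapes a quoted pattern to show it will be matched literally. A small sketch of both behaviors:)

    set -x
    transport=tcp
    [[ $transport == "tcp" ]] && echo literal   # xtrace shows: [[ tcp == \t\c\p ]]
    [[ $transport == t* ]]    && echo glob      # unquoted RHS is a glob pattern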
00:28:00.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:00.873 01:13:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.873 01:13:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.776 01:13:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:02.776 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.776 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:02.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:02.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:02.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.777 01:13:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:02.777 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:03.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:28:03.035 00:28:03.035 --- 10.0.0.2 ping statistics --- 00:28:03.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.035 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:28:03.035 00:28:03.035 --- 10.0.0.1 ping statistics --- 00:28:03.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.035 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3858453 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3858453 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3858453 ']' 00:28:03.035 01:13:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.036 01:13:55 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:03.036 01:13:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.036 01:13:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:03.036 01:13:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.036 [2024-07-25 01:13:56.039101] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:03.036 [2024-07-25 01:13:56.039200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.036 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.036 [2024-07-25 01:13:56.108527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:03.294 [2024-07-25 01:13:56.198740] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.294 [2024-07-25 01:13:56.198793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.294 [2024-07-25 01:13:56.198821] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.294 [2024-07-25 01:13:56.198834] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.294 [2024-07-25 01:13:56.198845] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.294 [2024-07-25 01:13:56.198939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.294 [2024-07-25 01:13:56.199056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.294 [2024-07-25 01:13:56.199059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.294 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:03.294 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:03.294 01:13:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:03.294 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 [2024-07-25 01:13:56.336462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.295 01:13:56 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 Malloc0 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 [2024-07-25 01:13:56.403932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 [2024-07-25 01:13:56.411803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.295 Malloc1 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.295 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
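The rpc_cmd traces above are the harness's wrapper around scripts/rpc.py talking to the nvmf_tgt JSON-RPC socket. A minimal sketch of the same target-side bring-up issued by hand (this assumes the target is already running and listening on the default /var/tmp/spdk.sock; the method names and arguments are copied from the traced calls, and the same pattern repeats for cnode2 just below):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport and a 64 MiB, 512 B-block malloc bdev, as in multicontroller.sh@27-29
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  # subsystem cnode1 with one namespace and two TCP listeners on 10.0.0.2
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two listeners on ports 4420 and 4421 are what later let the test attach, detach, and re-attach controllers to the same subsystem over distinct network paths.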
00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3858479 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3858479 /var/tmp/bdevperf.sock 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3858479 ']' 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:03.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
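Because bdevperf was started with -z and a private RPC socket (-r /var/tmp/bdevperf.sock), it sits idle until controllers are attached over that socket and a perform_tests RPC arrives; that is what the bdev_nvme_attach_controller calls and the bdevperf.py invocation later in this trace do. A condensed sketch of the same flow, with paths as used in this workspace and the -f flag kept as traced:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  # 128-deep 4 KiB write workload for 1 s; -z defers I/O until perform_tests
  $SPDK/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w write -t 1 -f &
  # attach the first controller path over the target's 4420 listener
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # kick off the configured run
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

Re-attaching under the same name NVMe0 with a different hostnqn, a different subsystem, or multipath disabled is expected to fail; that is what the NOT-wrapped rpc_cmd calls and the code -114 "already exists" JSON-RPC error responses below verify.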
00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:03.553 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.812 NVMe0n1 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.812 1 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.812 request: 00:28:03.812 { 00:28:03.812 "name": "NVMe0", 00:28:03.812 "trtype": "tcp", 00:28:03.812 "traddr": "10.0.0.2", 00:28:03.812 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:03.812 "hostaddr": "10.0.0.2", 00:28:03.812 "hostsvcid": "60000", 00:28:03.812 "adrfam": "ipv4", 00:28:03.812 "trsvcid": "4420", 00:28:03.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.812 "method": 
"bdev_nvme_attach_controller", 00:28:03.812 "req_id": 1 00:28:03.812 } 00:28:03.812 Got JSON-RPC error response 00:28:03.812 response: 00:28:03.812 { 00:28:03.812 "code": -114, 00:28:03.812 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:03.812 } 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.812 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.813 request: 00:28:03.813 { 00:28:03.813 "name": "NVMe0", 00:28:03.813 "trtype": "tcp", 00:28:03.813 "traddr": "10.0.0.2", 00:28:03.813 "hostaddr": "10.0.0.2", 00:28:03.813 "hostsvcid": "60000", 00:28:03.813 "adrfam": "ipv4", 00:28:03.813 "trsvcid": "4420", 00:28:03.813 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:03.813 "method": "bdev_nvme_attach_controller", 00:28:03.813 "req_id": 1 00:28:03.813 } 00:28:03.813 Got JSON-RPC error response 00:28:03.813 response: 00:28:03.813 { 00:28:03.813 "code": -114, 00:28:03.813 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:03.813 } 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.813 request: 00:28:03.813 { 00:28:03.813 "name": "NVMe0", 00:28:03.813 "trtype": "tcp", 00:28:03.813 "traddr": "10.0.0.2", 00:28:03.813 "hostaddr": "10.0.0.2", 00:28:03.813 "hostsvcid": "60000", 00:28:03.813 "adrfam": "ipv4", 00:28:03.813 "trsvcid": "4420", 00:28:03.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.813 "multipath": "disable", 00:28:03.813 "method": "bdev_nvme_attach_controller", 00:28:03.813 "req_id": 1 00:28:03.813 } 00:28:03.813 Got JSON-RPC error response 00:28:03.813 response: 00:28:03.813 { 00:28:03.813 "code": -114, 00:28:03.813 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:03.813 } 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.813 request: 00:28:03.813 { 00:28:03.813 "name": "NVMe0", 00:28:03.813 "trtype": "tcp", 00:28:03.813 "traddr": "10.0.0.2", 00:28:03.813 "hostaddr": "10.0.0.2", 00:28:03.813 "hostsvcid": "60000", 00:28:03.813 "adrfam": "ipv4", 00:28:03.813 "trsvcid": "4420", 00:28:03.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.813 "multipath": "failover", 00:28:03.813 "method": "bdev_nvme_attach_controller", 00:28:03.813 "req_id": 1 00:28:03.813 } 00:28:03.813 Got JSON-RPC error response 00:28:03.813 response: 00:28:03.813 { 00:28:03.813 "code": -114, 00:28:03.813 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:03.813 } 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.813 01:13:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.071 00:28:04.071 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.071 01:13:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:04.071 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.071 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.071 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.071 01:13:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:04.071 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.071 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.328 00:28:04.328 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.329 01:13:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:04.329 01:13:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:04.329 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.329 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.329 01:13:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.329 01:13:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:04.329 01:13:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:05.701 0 00:28:05.701 01:13:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:05.701 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.701 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.701 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.701 01:13:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3858479 00:28:05.701 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3858479 ']' 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3858479 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3858479 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3858479' 00:28:05.702 killing process with pid 3858479 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3858479 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3858479 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:05.702 01:13:58 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:05.702 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:05.702 [2024-07-25 01:13:56.517123] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:05.702 [2024-07-25 01:13:56.517202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858479 ] 00:28:05.702 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.702 [2024-07-25 01:13:56.578543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.702 [2024-07-25 01:13:56.664077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.702 [2024-07-25 01:13:57.314541] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 7cc55ec8-fdcd-453c-a131-c7d7e94fc438 already exists 00:28:05.702 [2024-07-25 01:13:57.314592] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:7cc55ec8-fdcd-453c-a131-c7d7e94fc438 alias for bdev NVMe1n1 00:28:05.702 [2024-07-25 01:13:57.314624] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:05.702 Running I/O for 1 seconds... 
00:28:05.702 00:28:05.702 Latency(us) 00:28:05.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.702 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:05.702 NVMe0n1 : 1.01 19702.32 76.96 0.00 0.00 6486.54 2912.71 11505.21 00:28:05.702 =================================================================================================================== 00:28:05.702 Total : 19702.32 76.96 0.00 0.00 6486.54 2912.71 11505.21 00:28:05.702 Received shutdown signal, test time was about 1.000000 seconds 00:28:05.702 00:28:05.702 Latency(us) 00:28:05.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.702 =================================================================================================================== 00:28:05.702 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.702 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.702 rmmod nvme_tcp 00:28:05.702 rmmod nvme_fabrics 00:28:05.702 rmmod nvme_keyring 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3858453 ']' 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3858453 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3858453 ']' 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3858453 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3858453 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3858453' 00:28:05.702 killing process with pid 3858453 00:28:05.702 01:13:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3858453 00:28:05.702 01:13:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3858453 00:28:05.960 01:13:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.960 01:13:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.960 01:13:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.960 01:13:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.960 01:13:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.960 01:13:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.960 01:13:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.960 01:13:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.488 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.488 00:28:08.488 real 0m7.243s 00:28:08.488 user 0m11.370s 00:28:08.488 sys 0m2.206s 00:28:08.488 01:14:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:08.488 01:14:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.488 ************************************ 00:28:08.488 END TEST nvmf_multicontroller 00:28:08.488 ************************************ 00:28:08.488 01:14:01 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:08.488 01:14:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:08.488 01:14:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:08.488 01:14:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.488 ************************************ 00:28:08.488 START TEST nvmf_aer 00:28:08.488 ************************************ 00:28:08.488 01:14:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:08.488 * Looking for test storage... 
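The nvmf_aer test starting here repeats the multicontroller run's nvmftestinit bring-up: one cvl_0_* port is moved into a network namespace to act as the target while the other stays in the root namespace as the initiator. Condensed from the commands traced above (interface names as enumerated on this host; the target binary path is the one used throughout this job):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the target application itself then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The ping checks that follow in the trace confirm the two sides can reach each other before any NVMe/TCP traffic is attempted.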
00:28:08.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.488 01:14:01 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.488 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:08.488 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.488 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.489 01:14:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:10.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:10.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:10.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.469 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:10.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.470 
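Editor's note: the helper has just classified the two E810 ports (PCI functions 0000:0a:00.0 and 0000:0a:00.1, driver ice) and picked cvl_0_0 as the target interface and cvl_0_1 as the initiator interface. The records that follow build the phy-mode loopback topology: the target port is moved into a private network namespace and the two ports talk over 10.0.0.0/24. A condensed sketch of that setup, reconstructed from the traced nvmf_tcp_init commands below (interface, namespace, and address names are the ones this run actually selected; the ports must be physically looped for the pings to succeed):

    # Clear stale addresses, then split the two ports across namespaces.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns

The two ping exchanges traced below verify both directions before the target application is started.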
01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:28:10.470 00:28:10.470 --- 10.0.0.2 ping statistics --- 00:28:10.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.470 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:10.470 00:28:10.470 --- 10.0.0.1 ping statistics --- 00:28:10.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.470 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3860689 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3860689 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3860689 ']' 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.470 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.470 [2024-07-25 01:14:03.445199] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:10.470 [2024-07-25 01:14:03.445300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.470 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.470 [2024-07-25 01:14:03.517774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.470 [2024-07-25 01:14:03.611921] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.470 [2024-07-25 01:14:03.611976] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:10.470 [2024-07-25 01:14:03.612002] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.470 [2024-07-25 01:14:03.612016] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.470 [2024-07-25 01:14:03.612028] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.470 [2024-07-25 01:14:03.612100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.470 [2024-07-25 01:14:03.612154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.470 [2024-07-25 01:14:03.612336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.470 [2024-07-25 01:14:03.612340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.727 [2024-07-25 01:14:03.773161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.727 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 Malloc0 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 [2024-07-25 01:14:03.826988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:10.728 [ 00:28:10.728 { 00:28:10.728 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:10.728 "subtype": "Discovery", 00:28:10.728 "listen_addresses": [], 00:28:10.728 "allow_any_host": true, 00:28:10.728 "hosts": [] 00:28:10.728 }, 00:28:10.728 { 00:28:10.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.728 "subtype": "NVMe", 00:28:10.728 "listen_addresses": [ 00:28:10.728 { 00:28:10.728 "trtype": "TCP", 00:28:10.728 "adrfam": "IPv4", 00:28:10.728 "traddr": "10.0.0.2", 00:28:10.728 "trsvcid": "4420" 00:28:10.728 } 00:28:10.728 ], 00:28:10.728 "allow_any_host": true, 00:28:10.728 "hosts": [], 00:28:10.728 "serial_number": "SPDK00000000000001", 00:28:10.728 "model_number": "SPDK bdev Controller", 00:28:10.728 "max_namespaces": 2, 00:28:10.728 "min_cntlid": 1, 00:28:10.728 "max_cntlid": 65519, 00:28:10.728 "namespaces": [ 00:28:10.728 { 00:28:10.728 "nsid": 1, 00:28:10.728 "bdev_name": "Malloc0", 00:28:10.728 "name": "Malloc0", 00:28:10.728 "nguid": "ED2FAC59095E449FAE6A9DB1CB34373C", 00:28:10.728 "uuid": "ed2fac59-095e-449f-ae6a-9db1cb34373c" 00:28:10.728 } 00:28:10.728 ] 00:28:10.728 } 00:28:10.728 ] 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3860834 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:10.728 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:10.986 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.986 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:10.986 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:10.986 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:10.986 01:14:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:10.986 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:10.986 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:28:10.986 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:28:10.986 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.244 Malloc1 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.244 Asynchronous Event Request test 00:28:11.244 Attaching to 10.0.0.2 00:28:11.244 Attached to 10.0.0.2 00:28:11.244 Registering asynchronous event callbacks... 00:28:11.244 Starting namespace attribute notice tests for all controllers... 00:28:11.244 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:11.244 aer_cb - Changed Namespace 00:28:11.244 Cleaning up... 
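Editor's note: that "Cleaning up..." line closes the substance of the AER test. The aer host tool connected to cnode1 and armed an Asynchronous Event Request; the script then hot-added a second namespace (Malloc1, nsid 2, filling the -m 2 limit the subsystem was created with), and the pending AER completed with a Namespace Attribute Changed notice (log page 0x04 in the aer_cb line above). The tool signals readiness by touching /tmp/aer_touch_file, which the polling helper traced at autotest_common.sh@1261-1272 watches. A minimal sketch of that handshake pattern, with names mirroring the trace (the real helper is slightly more general):

    waitforfile() {
        local file=$1 i=0
        # Poll for the touch file the aer tool creates once its event
        # callback is registered; give up after 200 * 0.1 s = 20 s.
        while [ ! -e "$file" ]; do
            if [ "$i" -ge 200 ]; then
                return 1
            fi
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }

    rm -f /tmp/aer_touch_file            # host/aer.sh removes it before launching
    test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    waitforfile /tmp/aer_touch_file      # unblocks once the AER callback is live
    # ... add the second namespace, then `wait $aerpid` as host/aer.sh@43 does.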
00:28:11.244 [ 00:28:11.244 { 00:28:11.244 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:11.244 "subtype": "Discovery", 00:28:11.244 "listen_addresses": [], 00:28:11.244 "allow_any_host": true, 00:28:11.244 "hosts": [] 00:28:11.244 }, 00:28:11.244 { 00:28:11.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.244 "subtype": "NVMe", 00:28:11.244 "listen_addresses": [ 00:28:11.244 { 00:28:11.244 "trtype": "TCP", 00:28:11.244 "adrfam": "IPv4", 00:28:11.244 "traddr": "10.0.0.2", 00:28:11.244 "trsvcid": "4420" 00:28:11.244 } 00:28:11.244 ], 00:28:11.244 "allow_any_host": true, 00:28:11.244 "hosts": [], 00:28:11.244 "serial_number": "SPDK00000000000001", 00:28:11.244 "model_number": "SPDK bdev Controller", 00:28:11.244 "max_namespaces": 2, 00:28:11.244 "min_cntlid": 1, 00:28:11.244 "max_cntlid": 65519, 00:28:11.244 "namespaces": [ 00:28:11.244 { 00:28:11.244 "nsid": 1, 00:28:11.244 "bdev_name": "Malloc0", 00:28:11.244 "name": "Malloc0", 00:28:11.244 "nguid": "ED2FAC59095E449FAE6A9DB1CB34373C", 00:28:11.244 "uuid": "ed2fac59-095e-449f-ae6a-9db1cb34373c" 00:28:11.244 }, 00:28:11.244 { 00:28:11.244 "nsid": 2, 00:28:11.244 "bdev_name": "Malloc1", 00:28:11.244 "name": "Malloc1", 00:28:11.244 "nguid": "68767962605C441E8DDA9AD36EBD8EAD", 00:28:11.244 "uuid": "68767962-605c-441e-8dda-9ad36ebd8ead" 00:28:11.244 } 00:28:11.244 ] 00:28:11.244 } 00:28:11.244 ] 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3860834 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:11.244 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.245 rmmod nvme_tcp 00:28:11.245 rmmod nvme_fabrics 00:28:11.245 rmmod nvme_keyring 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3860689 ']' 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3860689 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3860689 ']' 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3860689 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:11.245 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3860689 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3860689' 00:28:11.503 killing process with pid 3860689 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3860689 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3860689 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.503 01:14:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.033 01:14:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:14.033 00:28:14.033 real 0m5.538s 00:28:14.033 user 0m4.748s 00:28:14.033 sys 0m1.928s 00:28:14.033 01:14:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:14.033 01:14:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:14.033 ************************************ 00:28:14.033 END TEST nvmf_aer 00:28:14.033 ************************************ 00:28:14.033 01:14:06 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:14.033 01:14:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:14.033 01:14:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:14.033 01:14:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.033 ************************************ 00:28:14.033 START TEST nvmf_async_init 00:28:14.033 ************************************ 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:14.033 * Looking for test storage... 
00:28:14.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.033 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=05f8973745cb449cae865177c1c4283d 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.034 01:14:06 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.034 01:14:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:15.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:15.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.933 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:15.934 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
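Editor's note: this block is the same NIC discovery pass that ran before the nvmf_aer test. gather_supported_nvmf_pci_devs matches PCI vendor:device pairs against small allow lists (Intel 0x1592/0x159b into e810, Intel 0x37d2 into x722, the Mellanox 0x10xx/0xa2xx IDs into mlx), then resolves each selected function to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. The lookup for a single function, sketched with a literal BDF standing in for the helper's pci_bus_cache bookkeeping:

    shopt -s nullglob                                    # unmatched glob -> empty array
    pci=0000:0a:00.0                                     # BDF reported in the trace
    # Every netdev registered by this PCI function appears as a directory
    # under its sysfs node; strip the path to keep just the name.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    ((${#pci_net_devs[@]})) || { echo "no net device under $pci"; exit 1; }
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The nullglob line is sketch hygiene rather than a copy of the helper, which instead checks interface counts and operstate (the [[ up == up ]] tests above) before accepting a device.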
00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:15.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:15.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:28:15.934 00:28:15.934 --- 10.0.0.2 ping statistics --- 00:28:15.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.934 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:28:15.934 00:28:15.934 --- 10.0.0.1 ping statistics --- 00:28:15.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.934 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3862770 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3862770 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3862770 ']' 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:15.934 01:14:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:15.934 [2024-07-25 01:14:08.882351] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
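Editor's note: nvmfappstart has just launched the target for the async_init test. The whole nvmf_tgt process runs inside the cvl_0_0_ns_spdk namespace (NVMF_APP was prefixed with the ip netns exec wrapper after the ping checks) with a single-core mask for this test, and waitforlisten then blocks until the app's RPC socket answers before any rpc_cmd is issued. A rough equivalent of that start-and-wait sequence, assuming rpc.py rpc_get_methods as the readiness probe and a 0.5 s retry interval (the real helper's probe differs in detail; the flags and max_retries=100 are from the trace):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &     # flags as traced
    nvmfpid=$!
    # Poke the RPC socket until the target answers or retries run out.
    for ((i = 0; i < 100; i++)); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done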
00:28:15.934 [2024-07-25 01:14:08.882421] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.934 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.934 [2024-07-25 01:14:08.946170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.934 [2024-07-25 01:14:09.029960] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.934 [2024-07-25 01:14:09.030029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.934 [2024-07-25 01:14:09.030052] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.934 [2024-07-25 01:14:09.030063] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.934 [2024-07-25 01:14:09.030073] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.934 [2024-07-25 01:14:09.030104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.192 [2024-07-25 01:14:09.171779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.192 null0 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 05f8973745cb449cae865177c1c4283d 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.192 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.193 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:16.193 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.193 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.193 [2024-07-25 01:14:09.212052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.193 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.193 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:16.193 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.193 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.450 nvme0n1 00:28:16.450 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.450 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:16.450 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.450 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.450 [ 00:28:16.450 { 00:28:16.450 "name": "nvme0n1", 00:28:16.450 "aliases": [ 00:28:16.450 "05f89737-45cb-449c-ae86-5177c1c4283d" 00:28:16.450 ], 00:28:16.450 "product_name": "NVMe disk", 00:28:16.450 "block_size": 512, 00:28:16.450 "num_blocks": 2097152, 00:28:16.450 "uuid": "05f89737-45cb-449c-ae86-5177c1c4283d", 00:28:16.450 "assigned_rate_limits": { 00:28:16.450 "rw_ios_per_sec": 0, 00:28:16.451 "rw_mbytes_per_sec": 0, 00:28:16.451 "r_mbytes_per_sec": 0, 00:28:16.451 "w_mbytes_per_sec": 0 00:28:16.451 }, 00:28:16.451 "claimed": false, 00:28:16.451 "zoned": false, 00:28:16.451 "supported_io_types": { 00:28:16.451 "read": true, 00:28:16.451 "write": true, 00:28:16.451 "unmap": false, 00:28:16.451 "write_zeroes": true, 00:28:16.451 "flush": true, 00:28:16.451 "reset": true, 00:28:16.451 "compare": true, 00:28:16.451 "compare_and_write": true, 00:28:16.451 "abort": true, 00:28:16.451 "nvme_admin": true, 00:28:16.451 "nvme_io": true 00:28:16.451 }, 00:28:16.451 "memory_domains": [ 00:28:16.451 { 00:28:16.451 "dma_device_id": "system", 00:28:16.451 "dma_device_type": 1 00:28:16.451 } 00:28:16.451 ], 00:28:16.451 "driver_specific": { 00:28:16.451 "nvme": [ 00:28:16.451 { 00:28:16.451 "trid": { 00:28:16.451 "trtype": "TCP", 00:28:16.451 "adrfam": "IPv4", 00:28:16.451 "traddr": "10.0.0.2", 00:28:16.451 "trsvcid": "4420", 00:28:16.451 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:16.451 }, 00:28:16.451 "ctrlr_data": { 00:28:16.451 "cntlid": 1, 00:28:16.451 "vendor_id": "0x8086", 00:28:16.451 "model_number": "SPDK bdev Controller", 00:28:16.451 "serial_number": "00000000000000000000", 00:28:16.451 "firmware_revision": 
"24.05.1", 00:28:16.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.451 "oacs": { 00:28:16.451 "security": 0, 00:28:16.451 "format": 0, 00:28:16.451 "firmware": 0, 00:28:16.451 "ns_manage": 0 00:28:16.451 }, 00:28:16.451 "multi_ctrlr": true, 00:28:16.451 "ana_reporting": false 00:28:16.451 }, 00:28:16.451 "vs": { 00:28:16.451 "nvme_version": "1.3" 00:28:16.451 }, 00:28:16.451 "ns_data": { 00:28:16.451 "id": 1, 00:28:16.451 "can_share": true 00:28:16.451 } 00:28:16.451 } 00:28:16.451 ], 00:28:16.451 "mp_policy": "active_passive" 00:28:16.451 } 00:28:16.451 } 00:28:16.451 ] 00:28:16.451 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.451 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:16.451 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.451 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.451 [2024-07-25 01:14:09.460542] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:16.451 [2024-07-25 01:14:09.460636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1099760 (9): Bad file descriptor 00:28:16.451 [2024-07-25 01:14:09.592393] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:16.451 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.451 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:16.451 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.451 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.451 [ 00:28:16.451 { 00:28:16.451 "name": "nvme0n1", 00:28:16.451 "aliases": [ 00:28:16.451 "05f89737-45cb-449c-ae86-5177c1c4283d" 00:28:16.709 ], 00:28:16.709 "product_name": "NVMe disk", 00:28:16.709 "block_size": 512, 00:28:16.709 "num_blocks": 2097152, 00:28:16.709 "uuid": "05f89737-45cb-449c-ae86-5177c1c4283d", 00:28:16.709 "assigned_rate_limits": { 00:28:16.709 "rw_ios_per_sec": 0, 00:28:16.709 "rw_mbytes_per_sec": 0, 00:28:16.709 "r_mbytes_per_sec": 0, 00:28:16.709 "w_mbytes_per_sec": 0 00:28:16.709 }, 00:28:16.709 "claimed": false, 00:28:16.709 "zoned": false, 00:28:16.709 "supported_io_types": { 00:28:16.709 "read": true, 00:28:16.709 "write": true, 00:28:16.709 "unmap": false, 00:28:16.709 "write_zeroes": true, 00:28:16.709 "flush": true, 00:28:16.709 "reset": true, 00:28:16.709 "compare": true, 00:28:16.709 "compare_and_write": true, 00:28:16.709 "abort": true, 00:28:16.709 "nvme_admin": true, 00:28:16.709 "nvme_io": true 00:28:16.709 }, 00:28:16.709 "memory_domains": [ 00:28:16.709 { 00:28:16.709 "dma_device_id": "system", 00:28:16.709 "dma_device_type": 1 00:28:16.709 } 00:28:16.709 ], 00:28:16.709 "driver_specific": { 00:28:16.709 "nvme": [ 00:28:16.709 { 00:28:16.709 "trid": { 00:28:16.709 "trtype": "TCP", 00:28:16.709 "adrfam": "IPv4", 00:28:16.709 "traddr": "10.0.0.2", 00:28:16.709 "trsvcid": "4420", 00:28:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:16.709 }, 00:28:16.709 "ctrlr_data": { 00:28:16.709 "cntlid": 2, 00:28:16.709 "vendor_id": "0x8086", 00:28:16.709 "model_number": "SPDK bdev Controller", 00:28:16.709 "serial_number": "00000000000000000000", 00:28:16.709 "firmware_revision": "24.05.1", 00:28:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.709 
"oacs": { 00:28:16.709 "security": 0, 00:28:16.709 "format": 0, 00:28:16.709 "firmware": 0, 00:28:16.709 "ns_manage": 0 00:28:16.709 }, 00:28:16.709 "multi_ctrlr": true, 00:28:16.709 "ana_reporting": false 00:28:16.709 }, 00:28:16.709 "vs": { 00:28:16.709 "nvme_version": "1.3" 00:28:16.709 }, 00:28:16.709 "ns_data": { 00:28:16.709 "id": 1, 00:28:16.709 "can_share": true 00:28:16.709 } 00:28:16.709 } 00:28:16.709 ], 00:28:16.709 "mp_policy": "active_passive" 00:28:16.709 } 00:28:16.709 } 00:28:16.709 ] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZQaSkFcVhU 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZQaSkFcVhU 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.709 [2024-07-25 01:14:09.645160] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:16.709 [2024-07-25 01:14:09.645341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZQaSkFcVhU 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.709 [2024-07-25 01:14:09.653171] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZQaSkFcVhU 00:28:16.709 01:14:09 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.709 [2024-07-25 01:14:09.661182] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:16.709 [2024-07-25 01:14:09.661257] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:16.709 nvme0n1 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.709 [ 00:28:16.709 { 00:28:16.709 "name": "nvme0n1", 00:28:16.709 "aliases": [ 00:28:16.709 "05f89737-45cb-449c-ae86-5177c1c4283d" 00:28:16.709 ], 00:28:16.709 "product_name": "NVMe disk", 00:28:16.709 "block_size": 512, 00:28:16.709 "num_blocks": 2097152, 00:28:16.709 "uuid": "05f89737-45cb-449c-ae86-5177c1c4283d", 00:28:16.709 "assigned_rate_limits": { 00:28:16.709 "rw_ios_per_sec": 0, 00:28:16.709 "rw_mbytes_per_sec": 0, 00:28:16.709 "r_mbytes_per_sec": 0, 00:28:16.709 "w_mbytes_per_sec": 0 00:28:16.709 }, 00:28:16.709 "claimed": false, 00:28:16.709 "zoned": false, 00:28:16.709 "supported_io_types": { 00:28:16.709 "read": true, 00:28:16.709 "write": true, 00:28:16.709 "unmap": false, 00:28:16.709 "write_zeroes": true, 00:28:16.709 "flush": true, 00:28:16.709 "reset": true, 00:28:16.709 "compare": true, 00:28:16.709 "compare_and_write": true, 00:28:16.709 "abort": true, 00:28:16.709 "nvme_admin": true, 00:28:16.709 "nvme_io": true 00:28:16.709 }, 00:28:16.709 "memory_domains": [ 00:28:16.709 { 00:28:16.709 "dma_device_id": "system", 00:28:16.709 "dma_device_type": 1 00:28:16.709 } 00:28:16.709 ], 00:28:16.709 "driver_specific": { 00:28:16.709 "nvme": [ 00:28:16.709 { 00:28:16.709 "trid": { 00:28:16.709 "trtype": "TCP", 00:28:16.709 "adrfam": "IPv4", 00:28:16.709 "traddr": "10.0.0.2", 00:28:16.709 "trsvcid": "4421", 00:28:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:16.709 }, 00:28:16.709 "ctrlr_data": { 00:28:16.709 "cntlid": 3, 00:28:16.709 "vendor_id": "0x8086", 00:28:16.709 "model_number": "SPDK bdev Controller", 00:28:16.709 "serial_number": "00000000000000000000", 00:28:16.709 "firmware_revision": "24.05.1", 00:28:16.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.709 "oacs": { 00:28:16.709 "security": 0, 00:28:16.709 "format": 0, 00:28:16.709 "firmware": 0, 00:28:16.709 "ns_manage": 0 00:28:16.709 }, 00:28:16.709 "multi_ctrlr": true, 00:28:16.709 "ana_reporting": false 00:28:16.709 }, 00:28:16.709 "vs": { 00:28:16.709 "nvme_version": "1.3" 00:28:16.709 }, 00:28:16.709 "ns_data": { 00:28:16.709 "id": 1, 00:28:16.709 "can_share": true 00:28:16.709 } 00:28:16.709 } 00:28:16.709 ], 00:28:16.709 "mp_policy": "active_passive" 00:28:16.709 } 00:28:16.709 } 00:28:16.709 ] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ZQaSkFcVhU 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:16.709 rmmod nvme_tcp 00:28:16.709 rmmod nvme_fabrics 00:28:16.709 rmmod nvme_keyring 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3862770 ']' 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3862770 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3862770 ']' 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3862770 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3862770 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3862770' 00:28:16.709 killing process with pid 3862770 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3862770 00:28:16.709 [2024-07-25 01:14:09.851263] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:16.709 [2024-07-25 01:14:09.851312] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:16.709 01:14:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3862770 00:28:16.968 01:14:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:16.968 01:14:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:16.968 01:14:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:16.968 01:14:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.968 01:14:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:16.968 01:14:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.968 
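For reference, the TLS path exercised by this async_init run reduces to the RPC sequence below: a condensed sketch, with rpc.py standing in for the suite's rpc_cmd wrapper; the key is the same interchange-format PSK the trace wrote to its temp file, and the path-based --psk option is, per the warnings above, deprecated for removal in v24.09.

# Write the interchange-format NVMe TLS PSK to a private (0600) temp file.
KEY_PATH=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"

# Require explicit host authorization, open a TLS listener on a second port,
# and authorize the host NQN with the PSK.
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"

# Attach an initiator over the secure channel with the same key; success shows
# up in the trace above as cntlid 3 on trsvcid 4421.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"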
01:14:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.968 01:14:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.497 01:14:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:19.497 00:28:19.497 real 0m5.394s 00:28:19.497 user 0m2.002s 00:28:19.497 sys 0m1.767s 00:28:19.497 01:14:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:19.497 01:14:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:19.497 ************************************ 00:28:19.497 END TEST nvmf_async_init 00:28:19.497 ************************************ 00:28:19.497 01:14:12 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:19.497 01:14:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.497 01:14:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.497 01:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.497 ************************************ 00:28:19.497 START TEST dma 00:28:19.497 ************************************ 00:28:19.497 01:14:12 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:19.497 * Looking for test storage... 00:28:19.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.497 01:14:12 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.497 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.497 01:14:12 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.497 01:14:12 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.497 01:14:12 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.497 01:14:12 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.497 01:14:12 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.497 01:14:12 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.497 01:14:12 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:19.498 01:14:12 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.498 01:14:12 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.498 01:14:12 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:19.498 01:14:12 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:19.498 00:28:19.498 real 0m0.069s 00:28:19.498 user 0m0.034s 00:28:19.498 sys 0m0.041s 00:28:19.498 
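The near-zero runtimes reported here are expected: dma.sh applies only to RDMA and, as the two host/dma.sh trace lines above show, bails out on TCP before any target setup. A minimal sketch of that guard (the variable name is illustrative, not taken from the trace):

# dma.sh lines 12-13: skip the whole suite unless the transport under test is rdma.
if [ "$TEST_TRANSPORT" != "rdma" ]; then
    exit 0
fi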
01:14:12 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:19.498 01:14:12 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:19.498 ************************************ 00:28:19.498 END TEST dma 00:28:19.498 ************************************ 00:28:19.498 01:14:12 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:19.498 01:14:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.498 01:14:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.498 01:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.498 ************************************ 00:28:19.498 START TEST nvmf_identify 00:28:19.498 ************************************ 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:19.498 * Looking for test storage... 00:28:19.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.498 01:14:12 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.498 01:14:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.397 01:14:14 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:21.397 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:21.397 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.397 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.397 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.397 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.656 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:28:21.656 00:28:21.656 --- 10.0.0.2 ping statistics --- 00:28:21.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.656 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:28:21.656 00:28:21.656 --- 10.0.0.1 ping statistics --- 00:28:21.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.656 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3864891 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3864891 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3864891 ']' 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:21.656 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.656 [2024-07-25 01:14:14.681677] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
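Behind those pings, nvmf_tcp_init has split the two ports of the E810 NIC across namespaces; condensed, using this run's interface names (a sketch of the plumbing the trace performs, run as root):

# The target side lives in its own namespace with 10.0.0.2; the initiator
# keeps the peer port with 10.0.0.1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the default port, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1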
00:28:21.656 [2024-07-25 01:14:14.681752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.656 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.656 [2024-07-25 01:14:14.746317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.914 [2024-07-25 01:14:14.836801] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.914 [2024-07-25 01:14:14.836850] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.914 [2024-07-25 01:14:14.836870] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.914 [2024-07-25 01:14:14.836881] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.914 [2024-07-25 01:14:14.836892] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.914 [2024-07-25 01:14:14.836985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.914 [2024-07-25 01:14:14.837043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.914 [2024-07-25 01:14:14.837110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.914 [2024-07-25 01:14:14.837113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.914 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:21.914 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 [2024-07-25 01:14:14.966004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.915 01:14:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 Malloc0 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 [2024-07-25 01:14:15.038162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 [ 00:28:21.915 { 00:28:21.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:21.915 "subtype": "Discovery", 00:28:21.915 "listen_addresses": [ 00:28:21.915 { 00:28:21.915 "trtype": "TCP", 00:28:21.915 "adrfam": "IPv4", 00:28:21.915 "traddr": "10.0.0.2", 00:28:21.915 "trsvcid": "4420" 00:28:21.915 } 00:28:21.915 ], 00:28:21.915 "allow_any_host": true, 00:28:21.915 "hosts": [] 00:28:21.915 }, 00:28:21.915 { 00:28:21.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.915 "subtype": "NVMe", 00:28:21.915 "listen_addresses": [ 00:28:21.915 { 00:28:21.915 "trtype": "TCP", 00:28:21.915 "adrfam": "IPv4", 00:28:21.915 "traddr": "10.0.0.2", 00:28:21.915 "trsvcid": "4420" 00:28:21.915 } 00:28:21.915 ], 00:28:21.915 "allow_any_host": true, 00:28:21.915 "hosts": [], 00:28:21.915 "serial_number": "SPDK00000000000001", 00:28:21.915 "model_number": "SPDK bdev Controller", 00:28:21.915 "max_namespaces": 32, 00:28:21.915 "min_cntlid": 1, 00:28:21.915 "max_cntlid": 65519, 00:28:21.915 "namespaces": [ 00:28:21.915 { 00:28:21.915 "nsid": 1, 00:28:21.915 "bdev_name": "Malloc0", 00:28:21.915 "name": "Malloc0", 00:28:21.915 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:21.915 "eui64": "ABCDEF0123456789", 00:28:21.915 "uuid": "71b3fe74-eea1-4c55-b16a-a34bc1dd2573" 00:28:21.915 } 00:28:21.915 ] 00:28:21.915 } 00:28:21.915 ] 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.915 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:22.176 [2024-07-25 01:14:15.076777] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
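Stripped of the xtrace plumbing, the target-side preparation for this identify test is the short RPC sequence below, after which spdk_nvme_identify is aimed at the discovery subsystem; a condensed sketch, with rpc.py standing in for rpc_cmd (arguments as they appear in the trace):

# Transport, backing bdev, subsystem, and a namespace with explicit NGUID/EUI-64.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

# Listeners for both the data subsystem and the discovery service.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Query the discovery controller with all log flags enabled, as invoked above.
spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all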
00:28:22.176 [2024-07-25 01:14:15.076816] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865021 ] 00:28:22.176 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.176 [2024-07-25 01:14:15.109883] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:22.176 [2024-07-25 01:14:15.109944] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:22.176 [2024-07-25 01:14:15.109954] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:22.176 [2024-07-25 01:14:15.109969] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:22.176 [2024-07-25 01:14:15.109983] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:22.176 [2024-07-25 01:14:15.113290] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:22.176 [2024-07-25 01:14:15.113349] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa2c980 0 00:28:22.176 [2024-07-25 01:14:15.121273] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:22.176 [2024-07-25 01:14:15.121298] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:22.176 [2024-07-25 01:14:15.121306] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:22.176 [2024-07-25 01:14:15.121312] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:22.176 [2024-07-25 01:14:15.121364] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.121376] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.121383] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.176 [2024-07-25 01:14:15.121401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:22.176 [2024-07-25 01:14:15.121427] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.176 [2024-07-25 01:14:15.129253] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.176 [2024-07-25 01:14:15.129272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.176 [2024-07-25 01:14:15.129279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129287] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.176 [2024-07-25 01:14:15.129303] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:22.176 [2024-07-25 01:14:15.129315] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:22.176 [2024-07-25 01:14:15.129325] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:22.176 [2024-07-25 01:14:15.129347] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129355] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129362] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.176 [2024-07-25 01:14:15.129374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.176 [2024-07-25 01:14:15.129398] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.176 [2024-07-25 01:14:15.129547] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.176 [2024-07-25 01:14:15.129559] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.176 [2024-07-25 01:14:15.129566] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129572] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.176 [2024-07-25 01:14:15.129586] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:22.176 [2024-07-25 01:14:15.129600] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:22.176 [2024-07-25 01:14:15.129619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129627] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.176 [2024-07-25 01:14:15.129644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.176 [2024-07-25 01:14:15.129666] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.176 [2024-07-25 01:14:15.129777] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.176 [2024-07-25 01:14:15.129792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.176 [2024-07-25 01:14:15.129799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129806] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.176 [2024-07-25 01:14:15.129815] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:22.176 [2024-07-25 01:14:15.129829] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:22.176 [2024-07-25 01:14:15.129841] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129848] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.129855] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.176 [2024-07-25 01:14:15.129865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.176 [2024-07-25 01:14:15.129886] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.176 [2024-07-25 01:14:15.129990] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.176 [2024-07-25 01:14:15.130005] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.176 [2024-07-25 01:14:15.130012] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.130019] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.176 [2024-07-25 01:14:15.130027] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:22.176 [2024-07-25 01:14:15.130044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.130053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.130060] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.176 [2024-07-25 01:14:15.130070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.176 [2024-07-25 01:14:15.130090] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.176 [2024-07-25 01:14:15.130212] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.176 [2024-07-25 01:14:15.130227] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.176 [2024-07-25 01:14:15.130233] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.130240] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.176 [2024-07-25 01:14:15.130255] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:22.176 [2024-07-25 01:14:15.130264] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:22.176 [2024-07-25 01:14:15.130277] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:22.176 [2024-07-25 01:14:15.130391] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:22.176 [2024-07-25 01:14:15.130400] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:22.176 [2024-07-25 01:14:15.130413] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.176 [2024-07-25 01:14:15.130421] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.130427] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.130438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.177 [2024-07-25 01:14:15.130459] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.177 [2024-07-25 01:14:15.130619] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.177 [2024-07-25 01:14:15.130634] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.177 [2024-07-25 01:14:15.130640] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.177 
[2024-07-25 01:14:15.130647] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.177 [2024-07-25 01:14:15.130656] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:22.177 [2024-07-25 01:14:15.130672] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.130681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.130687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.130697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.177 [2024-07-25 01:14:15.130718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.177 [2024-07-25 01:14:15.130820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.177 [2024-07-25 01:14:15.130832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.177 [2024-07-25 01:14:15.130838] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.130845] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.177 [2024-07-25 01:14:15.130853] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:22.177 [2024-07-25 01:14:15.130861] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:22.177 [2024-07-25 01:14:15.130874] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:22.177 [2024-07-25 01:14:15.130894] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:22.177 [2024-07-25 01:14:15.130912] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.130920] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.130931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.177 [2024-07-25 01:14:15.130952] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.177 [2024-07-25 01:14:15.131096] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.177 [2024-07-25 01:14:15.131107] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.177 [2024-07-25 01:14:15.131114] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131124] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2c980): datao=0, datal=4096, cccid=0 00:28:22.177 [2024-07-25 01:14:15.131133] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa944c0) on tqpair(0xa2c980): expected_datao=0, payload_size=4096 00:28:22.177 [2024-07-25 01:14:15.131141] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.177 
[2024-07-25 01:14:15.131158] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131167] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.177 [2024-07-25 01:14:15.131221] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.177 [2024-07-25 01:14:15.131228] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131234] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.177 [2024-07-25 01:14:15.131256] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:22.177 [2024-07-25 01:14:15.131268] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:22.177 [2024-07-25 01:14:15.131276] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:22.177 [2024-07-25 01:14:15.131284] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:22.177 [2024-07-25 01:14:15.131292] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:22.177 [2024-07-25 01:14:15.131300] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:22.177 [2024-07-25 01:14:15.131314] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:22.177 [2024-07-25 01:14:15.131326] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131334] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131340] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.131350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:22.177 [2024-07-25 01:14:15.131372] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.177 [2024-07-25 01:14:15.131494] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.177 [2024-07-25 01:14:15.131506] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.177 [2024-07-25 01:14:15.131513] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131519] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa944c0) on tqpair=0xa2c980 00:28:22.177 [2024-07-25 01:14:15.131531] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131538] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131544] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.131554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.177 [2024-07-25 01:14:15.131564] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.131585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.177 [2024-07-25 01:14:15.131594] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.131620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.177 [2024-07-25 01:14:15.131629] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131636] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.131650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.177 [2024-07-25 01:14:15.131659] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:22.177 [2024-07-25 01:14:15.131677] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:22.177 [2024-07-25 01:14:15.131690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131696] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.131706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.177 [2024-07-25 01:14:15.131728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa944c0, cid 0, qid 0 00:28:22.177 [2024-07-25 01:14:15.131739] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa94620, cid 1, qid 0 00:28:22.177 [2024-07-25 01:14:15.131746] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa94780, cid 2, qid 0 00:28:22.177 [2024-07-25 01:14:15.131754] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa948e0, cid 3, qid 0 00:28:22.177 [2024-07-25 01:14:15.131761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa94a40, cid 4, qid 0 00:28:22.177 [2024-07-25 01:14:15.131930] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.177 [2024-07-25 01:14:15.131942] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.177 [2024-07-25 01:14:15.131949] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131956] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa94a40) on tqpair=0xa2c980 00:28:22.177 [2024-07-25 01:14:15.131964] 
nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:22.177 [2024-07-25 01:14:15.131973] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:22.177 [2024-07-25 01:14:15.131990] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.131999] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2c980) 00:28:22.177 [2024-07-25 01:14:15.132009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.177 [2024-07-25 01:14:15.132029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa94a40, cid 4, qid 0 00:28:22.177 [2024-07-25 01:14:15.132188] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.177 [2024-07-25 01:14:15.132203] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.177 [2024-07-25 01:14:15.132210] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.132216] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2c980): datao=0, datal=4096, cccid=4 00:28:22.177 [2024-07-25 01:14:15.132223] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa94a40) on tqpair(0xa2c980): expected_datao=0, payload_size=4096 00:28:22.177 [2024-07-25 01:14:15.132235] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.132261] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.132296] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.177 [2024-07-25 01:14:15.177255] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.178 [2024-07-25 01:14:15.177274] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.178 [2024-07-25 01:14:15.177282] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.178 [2024-07-25 01:14:15.177289] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa94a40) on tqpair=0xa2c980 00:28:22.178 [2024-07-25 01:14:15.177309] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:22.178 [2024-07-25 01:14:15.177350] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.178 [2024-07-25 01:14:15.177361] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2c980) 00:28:22.178 [2024-07-25 01:14:15.177373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.178 [2024-07-25 01:14:15.177384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.178 [2024-07-25 01:14:15.177392] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.178 [2024-07-25 01:14:15.177398] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa2c980) 00:28:22.178 [2024-07-25 01:14:15.177407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.178 [2024-07-25 01:14:15.177438] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xa94a40, cid 4, qid 0
00:28:22.178 [2024-07-25 01:14:15.177450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa94ba0, cid 5, qid 0
00:28:22.178 [2024-07-25 01:14:15.177659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:22.178 [2024-07-25 01:14:15.177674] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:22.178 [2024-07-25 01:14:15.177681] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.177687] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2c980): datao=0, datal=1024, cccid=4
00:28:22.178 [2024-07-25 01:14:15.177695] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa94a40) on tqpair(0xa2c980): expected_datao=0, payload_size=1024
00:28:22.178 [2024-07-25 01:14:15.177703] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.177712] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.177720] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.177728] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:22.178 [2024-07-25 01:14:15.177737] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:22.178 [2024-07-25 01:14:15.177743] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.177750] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa94ba0) on tqpair=0xa2c980
00:28:22.178 [2024-07-25 01:14:15.218381] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:22.178 [2024-07-25 01:14:15.218400] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:22.178 [2024-07-25 01:14:15.218407] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.218414] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa94a40) on tqpair=0xa2c980
00:28:22.178 [2024-07-25 01:14:15.218432] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.218441] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2c980)
00:28:22.178 [2024-07-25 01:14:15.218452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.178 [2024-07-25 01:14:15.218481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa94a40, cid 4, qid 0
00:28:22.178 [2024-07-25 01:14:15.218618] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:22.178 [2024-07-25 01:14:15.218633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:22.178 [2024-07-25 01:14:15.218640] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.218646] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2c980): datao=0, datal=3072, cccid=4
00:28:22.178 [2024-07-25 01:14:15.218654] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa94a40) on tqpair(0xa2c980): expected_datao=0, payload_size=3072
00:28:22.178 [2024-07-25 01:14:15.218661] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.218671] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.218679] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
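The two GET LOG PAGE reads traced above (cdw10:00ff0070 for a 1024-byte header read, cdw10:02ff0070 for the remaining 3072 bytes of records; the low byte 0x70 is the Discovery log identifier) are the host paging in the discovery log, and the trace continues below with an 8-byte re-read of the generation counter (cdw10:00010070) to confirm the log did not change mid-read. A minimal sketch of issuing the header read through SPDK's public host API follows; it is illustrative only, not the harness's code, and the helper name and busy-poll loop are this note's assumptions:

    /* Sketch: read the discovery log header from an already-attached
     * controller (e.g. one returned by spdk_nvme_connect()). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static bool g_log_done;

    static void log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed\n");
        }
        g_log_done = true;
    }

    static void read_discovery_log_header(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* The header struct (genctr, numrec, recfmt + reserved bytes) is
         * 1024 bytes -- the size of the first c2h_data transfer above.
         * A full reader would size a second read from numrec, exactly as
         * the trace does. */
        static struct spdk_nvmf_discovery_log_page header;

        g_log_done = false;
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                             0 /* nsid, as in the trace */,
                                             &header, sizeof(header), 0,
                                             log_page_done, NULL) != 0) {
            return;
        }
        while (!g_log_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        printf("genctr=%ju numrec=%ju\n",
               (uintmax_t)header.genctr, (uintmax_t)header.numrec);
    }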
00:28:22.178 [2024-07-25 01:14:15.218728] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:22.178 [2024-07-25 01:14:15.218742] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:22.178 [2024-07-25 01:14:15.218749] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.218756] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa94a40) on tqpair=0xa2c980
00:28:22.178 [2024-07-25 01:14:15.218771] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.218779] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2c980)
00:28:22.178 [2024-07-25 01:14:15.218790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.178 [2024-07-25 01:14:15.218817] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa94a40, cid 4, qid 0
00:28:22.178 [2024-07-25 01:14:15.218948] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:22.178 [2024-07-25 01:14:15.218962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:22.178 [2024-07-25 01:14:15.218969] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.218975] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2c980): datao=0, datal=8, cccid=4
00:28:22.178 [2024-07-25 01:14:15.218983] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa94a40) on tqpair(0xa2c980): expected_datao=0, payload_size=8
00:28:22.178 [2024-07-25 01:14:15.218990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.219000] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.219007] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.260397] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:22.178 [2024-07-25 01:14:15.260415] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:22.178 [2024-07-25 01:14:15.260422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:22.178 [2024-07-25 01:14:15.260429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa94a40) on tqpair=0xa2c980
00:28:22.178 =====================================================
00:28:22.178 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:22.178 =====================================================
00:28:22.178 Controller Capabilities/Features
00:28:22.178 ================================
00:28:22.178 Vendor ID: 0000
00:28:22.178 Subsystem Vendor ID: 0000
00:28:22.178 Serial Number: ....................
00:28:22.178 Model Number: ........................................
00:28:22.178 Firmware Version: 24.05.1
00:28:22.178 Recommended Arb Burst: 0
00:28:22.178 IEEE OUI Identifier: 00 00 00
00:28:22.178 Multi-path I/O
00:28:22.178 May have multiple subsystem ports: No
00:28:22.178 May have multiple controllers: No
00:28:22.178 Associated with SR-IOV VF: No
00:28:22.178 Max Data Transfer Size: 131072
00:28:22.178 Max Number of Namespaces: 0
00:28:22.178 Max Number of I/O Queues: 1024
00:28:22.178 NVMe Specification Version (VS): 1.3
00:28:22.178 NVMe Specification Version (Identify): 1.3
00:28:22.178 Maximum Queue Entries: 128
00:28:22.178 Contiguous Queues Required: Yes
00:28:22.178 Arbitration Mechanisms Supported
00:28:22.178 Weighted Round Robin: Not Supported
00:28:22.178 Vendor Specific: Not Supported
00:28:22.178 Reset Timeout: 15000 ms
00:28:22.178 Doorbell Stride: 4 bytes
00:28:22.178 NVM Subsystem Reset: Not Supported
00:28:22.178 Command Sets Supported
00:28:22.178 NVM Command Set: Supported
00:28:22.178 Boot Partition: Not Supported
00:28:22.178 Memory Page Size Minimum: 4096 bytes
00:28:22.178 Memory Page Size Maximum: 4096 bytes
00:28:22.178 Persistent Memory Region: Not Supported
00:28:22.178 Optional Asynchronous Events Supported
00:28:22.178 Namespace Attribute Notices: Not Supported
00:28:22.178 Firmware Activation Notices: Not Supported
00:28:22.178 ANA Change Notices: Not Supported
00:28:22.178 PLE Aggregate Log Change Notices: Not Supported
00:28:22.178 LBA Status Info Alert Notices: Not Supported
00:28:22.178 EGE Aggregate Log Change Notices: Not Supported
00:28:22.178 Normal NVM Subsystem Shutdown event: Not Supported
00:28:22.178 Zone Descriptor Change Notices: Not Supported
00:28:22.178 Discovery Log Change Notices: Supported
00:28:22.178 Controller Attributes
00:28:22.178 128-bit Host Identifier: Not Supported
00:28:22.178 Non-Operational Permissive Mode: Not Supported
00:28:22.178 NVM Sets: Not Supported
00:28:22.178 Read Recovery Levels: Not Supported
00:28:22.178 Endurance Groups: Not Supported
00:28:22.178 Predictable Latency Mode: Not Supported
00:28:22.178 Traffic Based Keep ALive: Not Supported
00:28:22.178 Namespace Granularity: Not Supported
00:28:22.178 SQ Associations: Not Supported
00:28:22.178 UUID List: Not Supported
00:28:22.178 Multi-Domain Subsystem: Not Supported
00:28:22.178 Fixed Capacity Management: Not Supported
00:28:22.178 Variable Capacity Management: Not Supported
00:28:22.178 Delete Endurance Group: Not Supported
00:28:22.178 Delete NVM Set: Not Supported
00:28:22.178 Extended LBA Formats Supported: Not Supported
00:28:22.178 Flexible Data Placement Supported: Not Supported
00:28:22.178
00:28:22.178 Controller Memory Buffer Support
00:28:22.178 ================================
00:28:22.178 Supported: No
00:28:22.178
00:28:22.178 Persistent Memory Region Support
00:28:22.178 ================================
00:28:22.178 Supported: No
00:28:22.178
00:28:22.178 Admin Command Set Attributes
00:28:22.178 ============================
00:28:22.178 Security Send/Receive: Not Supported
00:28:22.178 Format NVM: Not Supported
00:28:22.178 Firmware Activate/Download: Not Supported
00:28:22.179 Namespace Management: Not Supported
00:28:22.179 Device Self-Test: Not Supported
00:28:22.179 Directives: Not Supported
00:28:22.179 NVMe-MI: Not Supported
00:28:22.179 Virtualization Management: Not Supported
00:28:22.179 Doorbell Buffer Config: Not Supported
00:28:22.179 Get LBA Status Capability: Not Supported
00:28:22.179 Command & Feature Lockdown Capability: Not Supported
00:28:22.179 Abort Command Limit: 1
00:28:22.179 Async Event Request Limit: 4
00:28:22.179 Number of Firmware Slots: N/A
00:28:22.179 Firmware Slot 1 Read-Only: N/A
00:28:22.179 Firmware Activation Without Reset: N/A
00:28:22.179 Multiple Update Detection Support: N/A
00:28:22.179 Firmware Update Granularity: No Information Provided
00:28:22.179 Per-Namespace SMART Log: No
00:28:22.179 Asymmetric Namespace Access Log Page: Not Supported
00:28:22.179 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:22.179 Command Effects Log Page: Not Supported
00:28:22.179 Get Log Page Extended Data: Supported
00:28:22.179 Telemetry Log Pages: Not Supported
00:28:22.179 Persistent Event Log Pages: Not Supported
00:28:22.179 Supported Log Pages Log Page: May Support
00:28:22.179 Commands Supported & Effects Log Page: Not Supported
00:28:22.179 Feature Identifiers & Effects Log Page:May Support
00:28:22.179 NVMe-MI Commands & Effects Log Page: May Support
00:28:22.179 Data Area 4 for Telemetry Log: Not Supported
00:28:22.179 Error Log Page Entries Supported: 128
00:28:22.179 Keep Alive: Not Supported
00:28:22.179
00:28:22.179 NVM Command Set Attributes
00:28:22.179 ==========================
00:28:22.179 Submission Queue Entry Size
00:28:22.179 Max: 1
00:28:22.179 Min: 1
00:28:22.179 Completion Queue Entry Size
00:28:22.179 Max: 1
00:28:22.179 Min: 1
00:28:22.179 Number of Namespaces: 0
00:28:22.179 Compare Command: Not Supported
00:28:22.179 Write Uncorrectable Command: Not Supported
00:28:22.179 Dataset Management Command: Not Supported
00:28:22.179 Write Zeroes Command: Not Supported
00:28:22.179 Set Features Save Field: Not Supported
00:28:22.179 Reservations: Not Supported
00:28:22.179 Timestamp: Not Supported
00:28:22.179 Copy: Not Supported
00:28:22.179 Volatile Write Cache: Not Present
00:28:22.179 Atomic Write Unit (Normal): 1
00:28:22.179 Atomic Write Unit (PFail): 1
00:28:22.179 Atomic Compare & Write Unit: 1
00:28:22.179 Fused Compare & Write: Supported
00:28:22.179 Scatter-Gather List
00:28:22.179 SGL Command Set: Supported
00:28:22.179 SGL Keyed: Supported
00:28:22.179 SGL Bit Bucket Descriptor: Not Supported
00:28:22.179 SGL Metadata Pointer: Not Supported
00:28:22.179 Oversized SGL: Not Supported
00:28:22.179 SGL Metadata Address: Not Supported
00:28:22.179 SGL Offset: Supported
00:28:22.179 Transport SGL Data Block: Not Supported
00:28:22.179 Replay Protected Memory Block: Not Supported
00:28:22.179
00:28:22.179 Firmware Slot Information
00:28:22.179 =========================
00:28:22.179 Active slot: 0
00:28:22.179
00:28:22.179
00:28:22.179 Error Log
00:28:22.179 =========
00:28:22.179
00:28:22.179 Active Namespaces
00:28:22.179 =================
00:28:22.179 Discovery Log Page
00:28:22.179 ==================
00:28:22.179 Generation Counter: 2
00:28:22.179 Number of Records: 2
00:28:22.179 Record Format: 0
00:28:22.179
00:28:22.179 Discovery Log Entry 0
00:28:22.179 ----------------------
00:28:22.179 Transport Type: 3 (TCP)
00:28:22.179 Address Family: 1 (IPv4)
00:28:22.179 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:22.179 Entry Flags:
00:28:22.179 Duplicate Returned Information: 1
00:28:22.179 Explicit Persistent Connection Support for Discovery: 1
00:28:22.179 Transport Requirements:
00:28:22.179 Secure Channel: Not Required
00:28:22.179 Port ID: 0 (0x0000)
00:28:22.179 Controller ID: 65535 (0xffff)
00:28:22.179 Admin Max SQ Size: 128
00:28:22.179 Transport Service Identifier: 4420
00:28:22.179 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:22.179 Transport Address: 10.0.0.2
00:28:22.179 Discovery Log Entry 1
00:28:22.179 ----------------------
00:28:22.179 Transport Type: 3 (TCP)
00:28:22.179 Address Family: 1 (IPv4)
00:28:22.179 Subsystem Type: 2 (NVM Subsystem)
00:28:22.179 Entry Flags:
00:28:22.179 Duplicate Returned Information: 0
00:28:22.179 Explicit Persistent Connection Support for Discovery: 0
00:28:22.179 Transport Requirements:
00:28:22.179 Secure Channel: Not Required
00:28:22.179 Port ID: 0 (0x0000)
00:28:22.179 Controller ID: 65535 (0xffff)
00:28:22.179 Admin Max SQ Size: 128
00:28:22.179 Transport Service Identifier: 4420
00:28:22.179 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:22.179 Transport Address: 10.0.0.2 [2024-07-25 01:14:15.260551] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:28:22.179 [2024-07-25 01:14:15.260576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.179 [2024-07-25 01:14:15.260588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.179 [2024-07-25 01:14:15.260598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.179 [2024-07-25 01:14:15.260608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.179 [2024-07-25 01:14:15.260626] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.260635] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.260645] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2c980)
00:28:22.179 [2024-07-25 01:14:15.260656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.179 [2024-07-25 01:14:15.260681] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa948e0, cid 3, qid 0
00:28:22.179 [2024-07-25 01:14:15.260816] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:22.179 [2024-07-25 01:14:15.260828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:22.179 [2024-07-25 01:14:15.260834] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.260841] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa948e0) on tqpair=0xa2c980
00:28:22.179 [2024-07-25 01:14:15.260853] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.260861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.260867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2c980)
00:28:22.179 [2024-07-25 01:14:15.260878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.179 [2024-07-25 01:14:15.260903] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa948e0, cid 3, qid 0
00:28:22.179 [2024-07-25 01:14:15.261039] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:22.179 [2024-07-25 01:14:15.261054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:22.179 [2024-07-25 01:14:15.261061] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.261068] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa948e0) on tqpair=0xa2c980
00:28:22.179 [2024-07-25 01:14:15.261076] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:28:22.179 [2024-07-25 01:14:15.261084] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:28:22.179 [2024-07-25 01:14:15.261100] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.261109] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.261115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2c980)
00:28:22.179 [2024-07-25 01:14:15.261126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.179 [2024-07-25 01:14:15.261147] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa948e0, cid 3, qid 0
00:28:22.179 [2024-07-25 01:14:15.265267] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:22.179 [2024-07-25 01:14:15.265284] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:22.179 [2024-07-25 01:14:15.265291] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.265298] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa948e0) on tqpair=0xa2c980
00:28:22.179 [2024-07-25 01:14:15.265317] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.265326] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.265333] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2c980)
00:28:22.179 [2024-07-25 01:14:15.265344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.179 [2024-07-25 01:14:15.265366] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa948e0, cid 3, qid 0
00:28:22.179 [2024-07-25 01:14:15.265512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:22.179 [2024-07-25 01:14:15.265527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:22.179 [2024-07-25 01:14:15.265534] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:22.179 [2024-07-25 01:14:15.265541] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa948e0) on tqpair=0xa2c980
00:28:22.180 [2024-07-25 01:14:15.265558] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds
00:28:22.180
00:28:22.180 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:28:22.180 [2024-07-25 01:14:15.300938] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
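The spdk_nvme_identify invocation above drives everything that follows: the trace below is the host connecting to nqn.2016-06.io.spdk:cnode1 over TCP and walking the admin-queue init state machine (icreq, FABRIC CONNECT, VS/CAP property reads, CC.EN = 1, CSTS.RDY poll, IDENTIFY, AER configuration, keep-alive). A minimal standalone equivalent of that connect phase against SPDK's public host API is sketched here; it is illustrative only, not the test's code, and the app name "identify_sketch" is made up:

    /* Sketch, assuming SPDK's public host API (spdk/nvme.h): connect to the
     * same TCP subsystem the -r string above names and print its CNTLID.
     * spdk_nvme_connect() performs synchronously what the DEBUG lines
     * below trace step by step. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same target as the -r string in the log. */
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);   /* runs the init state machine */
        if (ctrlr == NULL) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);     /* cached IDENTIFY data */
        printf("connected, CNTLID 0x%04x\n", cdata->cntlid);
        spdk_nvme_detach(ctrlr);
        return 0;
    }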
00:28:22.180 [2024-07-25 01:14:15.300982] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865039 ] 00:28:22.180 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.440 [2024-07-25 01:14:15.337941] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:22.440 [2024-07-25 01:14:15.337995] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:22.440 [2024-07-25 01:14:15.338005] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:22.440 [2024-07-25 01:14:15.338022] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:22.440 [2024-07-25 01:14:15.338036] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:22.440 [2024-07-25 01:14:15.341299] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:22.440 [2024-07-25 01:14:15.341343] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x177f980 0 00:28:22.440 [2024-07-25 01:14:15.341478] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:22.440 [2024-07-25 01:14:15.341492] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:22.440 [2024-07-25 01:14:15.341500] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:22.440 [2024-07-25 01:14:15.341506] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:22.440 [2024-07-25 01:14:15.341546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.440 [2024-07-25 01:14:15.341558] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.440 [2024-07-25 01:14:15.341564] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.440 [2024-07-25 01:14:15.341579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:22.440 [2024-07-25 01:14:15.341604] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.440 [2024-07-25 01:14:15.348268] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.440 [2024-07-25 01:14:15.348286] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.440 [2024-07-25 01:14:15.348294] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.440 [2024-07-25 01:14:15.348301] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on tqpair=0x177f980 00:28:22.440 [2024-07-25 01:14:15.348317] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:22.440 [2024-07-25 01:14:15.348328] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:22.440 [2024-07-25 01:14:15.348338] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:22.440 [2024-07-25 01:14:15.348356] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.440 [2024-07-25 01:14:15.348365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.440 [2024-07-25 
01:14:15.348372] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.440 [2024-07-25 01:14:15.348388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.440 [2024-07-25 01:14:15.348415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.440 [2024-07-25 01:14:15.348540] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.440 [2024-07-25 01:14:15.348556] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.440 [2024-07-25 01:14:15.348562] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.440 [2024-07-25 01:14:15.348569] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on tqpair=0x177f980 00:28:22.440 [2024-07-25 01:14:15.348583] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:22.440 [2024-07-25 01:14:15.348598] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:22.440 [2024-07-25 01:14:15.348611] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.440 [2024-07-25 01:14:15.348619] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.440 [2024-07-25 01:14:15.348625] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.440 [2024-07-25 01:14:15.348636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.440 [2024-07-25 01:14:15.348658] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.440 [2024-07-25 01:14:15.348767] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.440 [2024-07-25 01:14:15.348782] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.440 [2024-07-25 01:14:15.348789] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.440 [2024-07-25 01:14:15.348796] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on tqpair=0x177f980 00:28:22.440 [2024-07-25 01:14:15.348806] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:22.441 [2024-07-25 01:14:15.348821] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:22.441 [2024-07-25 01:14:15.348833] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.348840] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.348847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.441 [2024-07-25 01:14:15.348857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.441 [2024-07-25 01:14:15.348879] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.441 [2024-07-25 01:14:15.348986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.441 [2024-07-25 01:14:15.349002] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
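The "read vs" and "read cap" states just traced are Fabrics PROPERTY GET reads of the controller's VS and CAP registers, carried as admin-queue (qid:0) capsules. Once spdk_nvme_connect() has returned, the cached values can be inspected through public accessors; the helper below is only a sketch on the same assumptions as the connect example above:

    /* Sketch: inspect the registers the init state machine just read.
     * CAP.MQES is zero-based, so MQES+1 = 128 matches "Maximum Queue
     * Entries: 128" in the identify output earlier; CAP.TO is in 500 ms
     * units (30 -> the 15000 ms "Reset Timeout" shown there). */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_init_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        printf("VS %u.%u  MQES+1 %u  CAP.TO %u (x500 ms)  CSTS.RDY %u\n",
               (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
               (unsigned)cap.bits.mqes + 1, (unsigned)cap.bits.to,
               (unsigned)csts.bits.rdy);
    }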
00:28:22.441 [2024-07-25 01:14:15.349008] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349015] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on tqpair=0x177f980 00:28:22.441 [2024-07-25 01:14:15.349025] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:22.441 [2024-07-25 01:14:15.349043] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349052] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349059] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.441 [2024-07-25 01:14:15.349069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.441 [2024-07-25 01:14:15.349090] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.441 [2024-07-25 01:14:15.349199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.441 [2024-07-25 01:14:15.349214] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.441 [2024-07-25 01:14:15.349221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on tqpair=0x177f980 00:28:22.441 [2024-07-25 01:14:15.349238] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:22.441 [2024-07-25 01:14:15.349254] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:22.441 [2024-07-25 01:14:15.349269] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:22.441 [2024-07-25 01:14:15.349378] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:22.441 [2024-07-25 01:14:15.349386] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:22.441 [2024-07-25 01:14:15.349399] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.441 [2024-07-25 01:14:15.349424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.441 [2024-07-25 01:14:15.349446] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.441 [2024-07-25 01:14:15.349572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.441 [2024-07-25 01:14:15.349584] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.441 [2024-07-25 01:14:15.349591] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349598] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on 
tqpair=0x177f980 00:28:22.441 [2024-07-25 01:14:15.349608] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:22.441 [2024-07-25 01:14:15.349624] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349633] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349640] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.441 [2024-07-25 01:14:15.349650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.441 [2024-07-25 01:14:15.349671] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.441 [2024-07-25 01:14:15.349794] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.441 [2024-07-25 01:14:15.349806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.441 [2024-07-25 01:14:15.349812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349819] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on tqpair=0x177f980 00:28:22.441 [2024-07-25 01:14:15.349828] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:22.441 [2024-07-25 01:14:15.349837] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:22.441 [2024-07-25 01:14:15.349850] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:22.441 [2024-07-25 01:14:15.349864] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:22.441 [2024-07-25 01:14:15.349883] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.349892] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.441 [2024-07-25 01:14:15.349903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.441 [2024-07-25 01:14:15.349925] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.441 [2024-07-25 01:14:15.350095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.441 [2024-07-25 01:14:15.350107] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.441 [2024-07-25 01:14:15.350114] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.350120] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177f980): datao=0, datal=4096, cccid=0 00:28:22.441 [2024-07-25 01:14:15.350128] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e74c0) on tqpair(0x177f980): expected_datao=0, payload_size=4096 00:28:22.441 [2024-07-25 01:14:15.350135] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.350152] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.350162] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390358] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.441 [2024-07-25 01:14:15.390378] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.441 [2024-07-25 01:14:15.390385] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390392] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on tqpair=0x177f980 00:28:22.441 [2024-07-25 01:14:15.390410] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:22.441 [2024-07-25 01:14:15.390421] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:22.441 [2024-07-25 01:14:15.390429] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:22.441 [2024-07-25 01:14:15.390436] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:22.441 [2024-07-25 01:14:15.390444] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:22.441 [2024-07-25 01:14:15.390452] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:22.441 [2024-07-25 01:14:15.390467] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:22.441 [2024-07-25 01:14:15.390480] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390488] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390495] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.441 [2024-07-25 01:14:15.390507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:22.441 [2024-07-25 01:14:15.390531] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.441 [2024-07-25 01:14:15.390656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.441 [2024-07-25 01:14:15.390668] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.441 [2024-07-25 01:14:15.390675] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390682] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e74c0) on tqpair=0x177f980 00:28:22.441 [2024-07-25 01:14:15.390695] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390709] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177f980) 00:28:22.441 [2024-07-25 01:14:15.390724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.441 [2024-07-25 01:14:15.390736] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390743] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.441 [2024-07-25 01:14:15.390750] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x177f980) 00:28:22.442 [2024-07-25 01:14:15.390759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.442 [2024-07-25 01:14:15.390769] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.390776] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.390782] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x177f980) 00:28:22.442 [2024-07-25 01:14:15.390791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.442 [2024-07-25 01:14:15.390801] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.390807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.390814] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177f980) 00:28:22.442 [2024-07-25 01:14:15.390822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.442 [2024-07-25 01:14:15.390831] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.390851] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.390864] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.390871] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177f980) 00:28:22.442 [2024-07-25 01:14:15.390882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.442 [2024-07-25 01:14:15.390920] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e74c0, cid 0, qid 0 00:28:22.442 [2024-07-25 01:14:15.390931] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7620, cid 1, qid 0 00:28:22.442 [2024-07-25 01:14:15.390939] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7780, cid 2, qid 0 00:28:22.442 [2024-07-25 01:14:15.390947] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e78e0, cid 3, qid 0 00:28:22.442 [2024-07-25 01:14:15.390954] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7a40, cid 4, qid 0 00:28:22.442 [2024-07-25 01:14:15.391114] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.442 [2024-07-25 01:14:15.391130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.442 [2024-07-25 01:14:15.391137] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.391144] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7a40) on tqpair=0x177f980 00:28:22.442 [2024-07-25 01:14:15.391154] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:22.442 [2024-07-25 01:14:15.391163] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.391177] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.391188] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.391198] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.391210] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.391217] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177f980) 00:28:22.442 [2024-07-25 01:14:15.391228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:22.442 [2024-07-25 01:14:15.395271] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7a40, cid 4, qid 0 00:28:22.442 [2024-07-25 01:14:15.395400] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.442 [2024-07-25 01:14:15.395413] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.442 [2024-07-25 01:14:15.395419] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.395426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7a40) on tqpair=0x177f980 00:28:22.442 [2024-07-25 01:14:15.395497] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.395517] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.395532] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.395540] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177f980) 00:28:22.442 [2024-07-25 01:14:15.395551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.442 [2024-07-25 01:14:15.395573] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7a40, cid 4, qid 0 00:28:22.442 [2024-07-25 01:14:15.395707] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.442 [2024-07-25 01:14:15.395722] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.442 [2024-07-25 01:14:15.395729] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.395736] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177f980): datao=0, datal=4096, cccid=4 00:28:22.442 [2024-07-25 01:14:15.395743] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e7a40) on tqpair(0x177f980): expected_datao=0, payload_size=4096 00:28:22.442 [2024-07-25 01:14:15.395751] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.395761] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.395769] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.395786] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.442 [2024-07-25 01:14:15.395797] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.442 [2024-07-25 01:14:15.395803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.395810] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7a40) on tqpair=0x177f980 00:28:22.442 [2024-07-25 01:14:15.395825] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:22.442 [2024-07-25 01:14:15.395848] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.395866] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.395879] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.395888] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177f980) 00:28:22.442 [2024-07-25 01:14:15.395898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.442 [2024-07-25 01:14:15.395920] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7a40, cid 4, qid 0 00:28:22.442 [2024-07-25 01:14:15.396064] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.442 [2024-07-25 01:14:15.396079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.442 [2024-07-25 01:14:15.396086] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396092] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177f980): datao=0, datal=4096, cccid=4 00:28:22.442 [2024-07-25 01:14:15.396100] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e7a40) on tqpair(0x177f980): expected_datao=0, payload_size=4096 00:28:22.442 [2024-07-25 01:14:15.396107] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396117] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396125] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396137] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.442 [2024-07-25 01:14:15.396146] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.442 [2024-07-25 01:14:15.396153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7a40) on tqpair=0x177f980 00:28:22.442 [2024-07-25 01:14:15.396181] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.396200] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:22.442 [2024-07-25 01:14:15.396214] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396222] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x177f980) 00:28:22.442 [2024-07-25 01:14:15.396233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.442 [2024-07-25 01:14:15.396261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7a40, cid 4, qid 0 00:28:22.442 [2024-07-25 01:14:15.396398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.442 [2024-07-25 01:14:15.396411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.442 [2024-07-25 01:14:15.396418] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396424] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177f980): datao=0, datal=4096, cccid=4 00:28:22.442 [2024-07-25 01:14:15.396432] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e7a40) on tqpair(0x177f980): expected_datao=0, payload_size=4096 00:28:22.442 [2024-07-25 01:14:15.396439] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396449] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396456] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.442 [2024-07-25 01:14:15.396485] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.442 [2024-07-25 01:14:15.396496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.442 [2024-07-25 01:14:15.396503] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.396509] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7a40) on tqpair=0x177f980 00:28:22.443 [2024-07-25 01:14:15.396523] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:22.443 [2024-07-25 01:14:15.396538] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:22.443 [2024-07-25 01:14:15.396553] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:22.443 [2024-07-25 01:14:15.396563] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:22.443 [2024-07-25 01:14:15.396575] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:22.443 [2024-07-25 01:14:15.396584] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:22.443 [2024-07-25 01:14:15.396592] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:22.443 [2024-07-25 01:14:15.396600] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:22.443 [2024-07-25 01:14:15.396625] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.396634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.396645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.443 [2024-07-25 01:14:15.396656] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.396663] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.396670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.396679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.443 [2024-07-25 01:14:15.396719] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7a40, cid 4, qid 0 00:28:22.443 [2024-07-25 01:14:15.396732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7ba0, cid 5, qid 0 00:28:22.443 [2024-07-25 01:14:15.396873] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.443 [2024-07-25 01:14:15.396886] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.443 [2024-07-25 01:14:15.396893] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.396900] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7a40) on tqpair=0x177f980 00:28:22.443 [2024-07-25 01:14:15.396912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.443 [2024-07-25 01:14:15.396922] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.443 [2024-07-25 01:14:15.396928] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.396934] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7ba0) on tqpair=0x177f980 00:28:22.443 [2024-07-25 01:14:15.396951] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.396960] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.396971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.443 [2024-07-25 01:14:15.396992] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7ba0, cid 5, qid 0 00:28:22.443 [2024-07-25 01:14:15.397116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.443 [2024-07-25 01:14:15.397128] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.443 [2024-07-25 01:14:15.397135] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397142] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7ba0) on tqpair=0x177f980 00:28:22.443 [2024-07-25 01:14:15.397159] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.397179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.443 [2024-07-25 01:14:15.397199] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7ba0, cid 5, qid 0 00:28:22.443 [2024-07-25 01:14:15.397324] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.443 [2024-07-25 01:14:15.397343] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.443 [2024-07-25 01:14:15.397351] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397357] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7ba0) on tqpair=0x177f980 00:28:22.443 [2024-07-25 01:14:15.397375] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.397395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.443 [2024-07-25 01:14:15.397416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7ba0, cid 5, qid 0 00:28:22.443 [2024-07-25 01:14:15.397533] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.443 [2024-07-25 01:14:15.397548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.443 [2024-07-25 01:14:15.397555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397561] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7ba0) on tqpair=0x177f980 00:28:22.443 [2024-07-25 01:14:15.397582] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397592] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.397603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.443 [2024-07-25 01:14:15.397615] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397623] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.397632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.443 [2024-07-25 01:14:15.397644] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.397661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.443 [2024-07-25 01:14:15.397673] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397680] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x177f980) 00:28:22.443 [2024-07-25 01:14:15.397689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.443 [2024-07-25 01:14:15.397711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7ba0, cid 5, qid 0 00:28:22.443 [2024-07-25 01:14:15.397722] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7a40, cid 4, qid 0 00:28:22.443 [2024-07-25 01:14:15.397730] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x17e7d00, cid 6, qid 0 00:28:22.443 [2024-07-25 01:14:15.397738] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7e60, cid 7, qid 0 00:28:22.443 [2024-07-25 01:14:15.397926] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.443 [2024-07-25 01:14:15.397939] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.443 [2024-07-25 01:14:15.397946] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.397952] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177f980): datao=0, datal=8192, cccid=5 00:28:22.443 [2024-07-25 01:14:15.397960] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e7ba0) on tqpair(0x177f980): expected_datao=0, payload_size=8192 00:28:22.443 [2024-07-25 01:14:15.397967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398028] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398039] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398047] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.443 [2024-07-25 01:14:15.398056] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.443 [2024-07-25 01:14:15.398062] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398069] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177f980): datao=0, datal=512, cccid=4 00:28:22.443 [2024-07-25 01:14:15.398076] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e7a40) on tqpair(0x177f980): expected_datao=0, payload_size=512 00:28:22.443 [2024-07-25 01:14:15.398083] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398092] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398099] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398108] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.443 [2024-07-25 01:14:15.398116] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.443 [2024-07-25 01:14:15.398122] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398129] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177f980): datao=0, datal=512, cccid=6 00:28:22.443 [2024-07-25 01:14:15.398136] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e7d00) on tqpair(0x177f980): expected_datao=0, payload_size=512 00:28:22.443 [2024-07-25 01:14:15.398143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.443 [2024-07-25 01:14:15.398152] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398159] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398167] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:22.444 [2024-07-25 01:14:15.398176] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:22.444 [2024-07-25 01:14:15.398182] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398188] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177f980): datao=0, datal=4096, cccid=7 
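The "pdu type" values in the traces above follow the NVMe/TCP common-header encoding: 4 = CapsuleCmd, 5 = CapsuleResp, 6 = H2CData, 7 = C2HData, so the repeating 7-then-5 pattern is controller data arriving followed by the response capsule for each admin command. As a reader's convenience (not part of the test itself), a one-liner like the following tallies the PDU types in a saved copy of this output; the file name build.log is a placeholder, not a file the job produces:

    # count occurrences of each PDU type seen by the initiator
    grep -o 'pdu type = *[0-9]*' build.log | awk '{print $NF}' | sort | uniq -c
    # types 5 (CapsuleResp) and 7 (C2HData) dominate during controller identify
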
00:28:22.444 [2024-07-25 01:14:15.398196] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17e7e60) on tqpair(0x177f980): expected_datao=0, payload_size=4096 00:28:22.444 [2024-07-25 01:14:15.398203] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398212] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398219] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.444 [2024-07-25 01:14:15.398240] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.444 [2024-07-25 01:14:15.398254] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7ba0) on tqpair=0x177f980 00:28:22.444 [2024-07-25 01:14:15.398282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.444 [2024-07-25 01:14:15.398293] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.444 [2024-07-25 01:14:15.398299] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398305] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7a40) on tqpair=0x177f980 00:28:22.444 [2024-07-25 01:14:15.398321] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.444 [2024-07-25 01:14:15.398331] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.444 [2024-07-25 01:14:15.398337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398344] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7d00) on tqpair=0x177f980 00:28:22.444 [2024-07-25 01:14:15.398359] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.444 [2024-07-25 01:14:15.398373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.444 [2024-07-25 01:14:15.398380] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.444 [2024-07-25 01:14:15.398387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7e60) on tqpair=0x177f980 00:28:22.444 ===================================================== 00:28:22.444 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.444 ===================================================== 00:28:22.444 Controller Capabilities/Features 00:28:22.444 ================================ 00:28:22.444 Vendor ID: 8086 00:28:22.444 Subsystem Vendor ID: 8086 00:28:22.444 Serial Number: SPDK00000000000001 00:28:22.444 Model Number: SPDK bdev Controller 00:28:22.444 Firmware Version: 24.05.1 00:28:22.444 Recommended Arb Burst: 6 00:28:22.444 IEEE OUI Identifier: e4 d2 5c 00:28:22.444 Multi-path I/O 00:28:22.444 May have multiple subsystem ports: Yes 00:28:22.444 May have multiple controllers: Yes 00:28:22.444 Associated with SR-IOV VF: No 00:28:22.444 Max Data Transfer Size: 131072 00:28:22.444 Max Number of Namespaces: 32 00:28:22.444 Max Number of I/O Queues: 127 00:28:22.444 NVMe Specification Version (VS): 1.3 00:28:22.444 NVMe Specification Version (Identify): 1.3 00:28:22.444 Maximum Queue Entries: 128 00:28:22.444 Contiguous Queues Required: Yes 00:28:22.444 Arbitration Mechanisms Supported 00:28:22.444 Weighted Round Robin: Not Supported 00:28:22.444 Vendor 
Specific: Not Supported 00:28:22.444 Reset Timeout: 15000 ms 00:28:22.444 Doorbell Stride: 4 bytes 00:28:22.444 NVM Subsystem Reset: Not Supported 00:28:22.444 Command Sets Supported 00:28:22.444 NVM Command Set: Supported 00:28:22.444 Boot Partition: Not Supported 00:28:22.444 Memory Page Size Minimum: 4096 bytes 00:28:22.444 Memory Page Size Maximum: 4096 bytes 00:28:22.444 Persistent Memory Region: Not Supported 00:28:22.444 Optional Asynchronous Events Supported 00:28:22.444 Namespace Attribute Notices: Supported 00:28:22.444 Firmware Activation Notices: Not Supported 00:28:22.444 ANA Change Notices: Not Supported 00:28:22.444 PLE Aggregate Log Change Notices: Not Supported 00:28:22.444 LBA Status Info Alert Notices: Not Supported 00:28:22.444 EGE Aggregate Log Change Notices: Not Supported 00:28:22.444 Normal NVM Subsystem Shutdown event: Not Supported 00:28:22.444 Zone Descriptor Change Notices: Not Supported 00:28:22.444 Discovery Log Change Notices: Not Supported 00:28:22.444 Controller Attributes 00:28:22.444 128-bit Host Identifier: Supported 00:28:22.444 Non-Operational Permissive Mode: Not Supported 00:28:22.444 NVM Sets: Not Supported 00:28:22.444 Read Recovery Levels: Not Supported 00:28:22.444 Endurance Groups: Not Supported 00:28:22.444 Predictable Latency Mode: Not Supported 00:28:22.444 Traffic Based Keep ALive: Not Supported 00:28:22.444 Namespace Granularity: Not Supported 00:28:22.444 SQ Associations: Not Supported 00:28:22.444 UUID List: Not Supported 00:28:22.444 Multi-Domain Subsystem: Not Supported 00:28:22.444 Fixed Capacity Management: Not Supported 00:28:22.444 Variable Capacity Management: Not Supported 00:28:22.444 Delete Endurance Group: Not Supported 00:28:22.444 Delete NVM Set: Not Supported 00:28:22.444 Extended LBA Formats Supported: Not Supported 00:28:22.444 Flexible Data Placement Supported: Not Supported 00:28:22.444 00:28:22.444 Controller Memory Buffer Support 00:28:22.444 ================================ 00:28:22.444 Supported: No 00:28:22.444 00:28:22.444 Persistent Memory Region Support 00:28:22.444 ================================ 00:28:22.444 Supported: No 00:28:22.444 00:28:22.444 Admin Command Set Attributes 00:28:22.444 ============================ 00:28:22.444 Security Send/Receive: Not Supported 00:28:22.444 Format NVM: Not Supported 00:28:22.444 Firmware Activate/Download: Not Supported 00:28:22.444 Namespace Management: Not Supported 00:28:22.444 Device Self-Test: Not Supported 00:28:22.444 Directives: Not Supported 00:28:22.444 NVMe-MI: Not Supported 00:28:22.444 Virtualization Management: Not Supported 00:28:22.444 Doorbell Buffer Config: Not Supported 00:28:22.444 Get LBA Status Capability: Not Supported 00:28:22.444 Command & Feature Lockdown Capability: Not Supported 00:28:22.444 Abort Command Limit: 4 00:28:22.444 Async Event Request Limit: 4 00:28:22.444 Number of Firmware Slots: N/A 00:28:22.444 Firmware Slot 1 Read-Only: N/A 00:28:22.444 Firmware Activation Without Reset: N/A 00:28:22.444 Multiple Update Detection Support: N/A 00:28:22.444 Firmware Update Granularity: No Information Provided 00:28:22.444 Per-Namespace SMART Log: No 00:28:22.444 Asymmetric Namespace Access Log Page: Not Supported 00:28:22.444 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:22.444 Command Effects Log Page: Supported 00:28:22.444 Get Log Page Extended Data: Supported 00:28:22.444 Telemetry Log Pages: Not Supported 00:28:22.444 Persistent Event Log Pages: Not Supported 00:28:22.444 Supported Log Pages Log Page: May Support 00:28:22.444 Commands 
Supported & Effects Log Page: Not Supported 00:28:22.444 Feature Identifiers & Effects Log Page:May Support 00:28:22.444 NVMe-MI Commands & Effects Log Page: May Support 00:28:22.444 Data Area 4 for Telemetry Log: Not Supported 00:28:22.444 Error Log Page Entries Supported: 128 00:28:22.444 Keep Alive: Supported 00:28:22.444 Keep Alive Granularity: 10000 ms 00:28:22.444 00:28:22.444 NVM Command Set Attributes 00:28:22.444 ========================== 00:28:22.444 Submission Queue Entry Size 00:28:22.444 Max: 64 00:28:22.444 Min: 64 00:28:22.444 Completion Queue Entry Size 00:28:22.444 Max: 16 00:28:22.444 Min: 16 00:28:22.444 Number of Namespaces: 32 00:28:22.444 Compare Command: Supported 00:28:22.444 Write Uncorrectable Command: Not Supported 00:28:22.444 Dataset Management Command: Supported 00:28:22.444 Write Zeroes Command: Supported 00:28:22.444 Set Features Save Field: Not Supported 00:28:22.444 Reservations: Supported 00:28:22.444 Timestamp: Not Supported 00:28:22.444 Copy: Supported 00:28:22.444 Volatile Write Cache: Present 00:28:22.444 Atomic Write Unit (Normal): 1 00:28:22.444 Atomic Write Unit (PFail): 1 00:28:22.444 Atomic Compare & Write Unit: 1 00:28:22.444 Fused Compare & Write: Supported 00:28:22.444 Scatter-Gather List 00:28:22.444 SGL Command Set: Supported 00:28:22.444 SGL Keyed: Supported 00:28:22.444 SGL Bit Bucket Descriptor: Not Supported 00:28:22.444 SGL Metadata Pointer: Not Supported 00:28:22.444 Oversized SGL: Not Supported 00:28:22.444 SGL Metadata Address: Not Supported 00:28:22.444 SGL Offset: Supported 00:28:22.444 Transport SGL Data Block: Not Supported 00:28:22.444 Replay Protected Memory Block: Not Supported 00:28:22.444 00:28:22.444 Firmware Slot Information 00:28:22.444 ========================= 00:28:22.444 Active slot: 1 00:28:22.444 Slot 1 Firmware Revision: 24.05.1 00:28:22.444 00:28:22.444 00:28:22.444 Commands Supported and Effects 00:28:22.445 ============================== 00:28:22.445 Admin Commands 00:28:22.445 -------------- 00:28:22.445 Get Log Page (02h): Supported 00:28:22.445 Identify (06h): Supported 00:28:22.445 Abort (08h): Supported 00:28:22.445 Set Features (09h): Supported 00:28:22.445 Get Features (0Ah): Supported 00:28:22.445 Asynchronous Event Request (0Ch): Supported 00:28:22.445 Keep Alive (18h): Supported 00:28:22.445 I/O Commands 00:28:22.445 ------------ 00:28:22.445 Flush (00h): Supported LBA-Change 00:28:22.445 Write (01h): Supported LBA-Change 00:28:22.445 Read (02h): Supported 00:28:22.445 Compare (05h): Supported 00:28:22.445 Write Zeroes (08h): Supported LBA-Change 00:28:22.445 Dataset Management (09h): Supported LBA-Change 00:28:22.445 Copy (19h): Supported LBA-Change 00:28:22.445 Unknown (79h): Supported LBA-Change 00:28:22.445 Unknown (7Ah): Supported 00:28:22.445 00:28:22.445 Error Log 00:28:22.445 ========= 00:28:22.445 00:28:22.445 Arbitration 00:28:22.445 =========== 00:28:22.445 Arbitration Burst: 1 00:28:22.445 00:28:22.445 Power Management 00:28:22.445 ================ 00:28:22.445 Number of Power States: 1 00:28:22.445 Current Power State: Power State #0 00:28:22.445 Power State #0: 00:28:22.445 Max Power: 0.00 W 00:28:22.445 Non-Operational State: Operational 00:28:22.445 Entry Latency: Not Reported 00:28:22.445 Exit Latency: Not Reported 00:28:22.445 Relative Read Throughput: 0 00:28:22.445 Relative Read Latency: 0 00:28:22.445 Relative Write Throughput: 0 00:28:22.445 Relative Write Latency: 0 00:28:22.445 Idle Power: Not Reported 00:28:22.445 Active Power: Not Reported 00:28:22.445 Non-Operational 
Permissive Mode: Not Supported 00:28:22.445 00:28:22.445 Health Information 00:28:22.445 ================== 00:28:22.445 Critical Warnings: 00:28:22.445 Available Spare Space: OK 00:28:22.445 Temperature: OK 00:28:22.445 Device Reliability: OK 00:28:22.445 Read Only: No 00:28:22.445 Volatile Memory Backup: OK 00:28:22.445 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:22.445 Temperature Threshold: [2024-07-25 01:14:15.398517] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.398530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x177f980) 00:28:22.445 [2024-07-25 01:14:15.398555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-25 01:14:15.398579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e7e60, cid 7, qid 0 00:28:22.445 [2024-07-25 01:14:15.398722] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.445 [2024-07-25 01:14:15.398738] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.445 [2024-07-25 01:14:15.398744] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.398751] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e7e60) on tqpair=0x177f980 00:28:22.445 [2024-07-25 01:14:15.398794] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:22.445 [2024-07-25 01:14:15.398816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-25 01:14:15.398828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-25 01:14:15.398838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-25 01:14:15.398848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-25 01:14:15.398861] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.398869] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.398876] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177f980) 00:28:22.445 [2024-07-25 01:14:15.398887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-25 01:14:15.398910] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e78e0, cid 3, qid 0 00:28:22.445 [2024-07-25 01:14:15.399015] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.445 [2024-07-25 01:14:15.399030] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.445 [2024-07-25 01:14:15.399037] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.399043] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e78e0) on tqpair=0x177f980 00:28:22.445 [2024-07-25 01:14:15.399055] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.399063] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.399070] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177f980) 00:28:22.445 [2024-07-25 01:14:15.399081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-25 01:14:15.399108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e78e0, cid 3, qid 0 00:28:22.445 [2024-07-25 01:14:15.399229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.445 [2024-07-25 01:14:15.403252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.445 [2024-07-25 01:14:15.403266] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.403273] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e78e0) on tqpair=0x177f980 00:28:22.445 [2024-07-25 01:14:15.403283] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:22.445 [2024-07-25 01:14:15.403295] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:22.445 [2024-07-25 01:14:15.403314] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.403324] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.403330] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177f980) 00:28:22.445 [2024-07-25 01:14:15.403341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-25 01:14:15.403365] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17e78e0, cid 3, qid 0 00:28:22.445 [2024-07-25 01:14:15.403481] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:22.445 [2024-07-25 01:14:15.403496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:22.445 [2024-07-25 01:14:15.403503] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:22.445 [2024-07-25 01:14:15.403509] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17e78e0) on tqpair=0x177f980 00:28:22.445 [2024-07-25 01:14:15.403524] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:28:22.445 0 Kelvin (-273 Celsius) 00:28:22.445 Available Spare: 0% 00:28:22.445 Available Spare Threshold: 0% 00:28:22.445 Life Percentage Used: 0% 00:28:22.445 Data Units Read: 0 00:28:22.445 Data Units Written: 0 00:28:22.446 Host Read Commands: 0 00:28:22.446 Host Write Commands: 0 00:28:22.446 Controller Busy Time: 0 minutes 00:28:22.446 Power Cycles: 0 00:28:22.446 Power On Hours: 0 hours 00:28:22.446 Unsafe Shutdowns: 0 00:28:22.446 Unrecoverable Media Errors: 0 00:28:22.446 Lifetime Error Log Entries: 0 00:28:22.446 Warning Temperature Time: 0 minutes 00:28:22.446 Critical Temperature Time: 0 minutes 00:28:22.446 00:28:22.446 Number of Queues 00:28:22.446 ================ 00:28:22.446 Number of I/O Submission Queues: 127 00:28:22.446 Number of I/O Completion Queues: 127 00:28:22.446 00:28:22.446 Active Namespaces 00:28:22.446 ================= 00:28:22.446 Namespace ID:1 00:28:22.446 Error Recovery Timeout: Unlimited 00:28:22.446 Command Set Identifier: NVM (00h) 00:28:22.446 
Deallocate: Supported 00:28:22.446 Deallocated/Unwritten Error: Not Supported 00:28:22.446 Deallocated Read Value: Unknown 00:28:22.446 Deallocate in Write Zeroes: Not Supported 00:28:22.446 Deallocated Guard Field: 0xFFFF 00:28:22.446 Flush: Supported 00:28:22.446 Reservation: Supported 00:28:22.446 Namespace Sharing Capabilities: Multiple Controllers 00:28:22.446 Size (in LBAs): 131072 (0GiB) 00:28:22.446 Capacity (in LBAs): 131072 (0GiB) 00:28:22.446 Utilization (in LBAs): 131072 (0GiB) 00:28:22.446 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:22.446 EUI64: ABCDEF0123456789 00:28:22.446 UUID: 71b3fe74-eea1-4c55-b16a-a34bc1dd2573 00:28:22.446 Thin Provisioning: Not Supported 00:28:22.446 Per-NS Atomic Units: Yes 00:28:22.446 Atomic Boundary Size (Normal): 0 00:28:22.446 Atomic Boundary Size (PFail): 0 00:28:22.446 Atomic Boundary Offset: 0 00:28:22.446 Maximum Single Source Range Length: 65535 00:28:22.446 Maximum Copy Length: 65535 00:28:22.446 Maximum Source Range Count: 1 00:28:22.446 NGUID/EUI64 Never Reused: No 00:28:22.446 Namespace Write Protected: No 00:28:22.446 Number of LBA Formats: 1 00:28:22.446 Current LBA Format: LBA Format #00 00:28:22.446 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:22.446 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:22.446 rmmod nvme_tcp 00:28:22.446 rmmod nvme_fabrics 00:28:22.446 rmmod nvme_keyring 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3864891 ']' 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3864891 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3864891 ']' 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3864891 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3864891 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 
-- # process_name=reactor_0 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3864891' 00:28:22.446 killing process with pid 3864891 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3864891 00:28:22.446 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3864891 00:28:22.704 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:22.704 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:22.704 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:22.704 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.704 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.704 01:14:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.704 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.704 01:14:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.234 01:14:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:25.234 00:28:25.234 real 0m5.537s 00:28:25.234 user 0m4.472s 00:28:25.234 sys 0m1.938s 00:28:25.234 01:14:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:25.234 01:14:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:25.234 ************************************ 00:28:25.234 END TEST nvmf_identify 00:28:25.234 ************************************ 00:28:25.234 01:14:17 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:25.234 01:14:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:25.234 01:14:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:25.234 01:14:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.234 ************************************ 00:28:25.234 START TEST nvmf_perf 00:28:25.234 ************************************ 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:25.234 * Looking for test storage... 
00:28:25.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.234 01:14:17 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.234 01:14:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:27.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:27.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:27.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:27.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:27.136 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:27.137 01:14:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:27.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:28:27.137 00:28:27.137 --- 10.0.0.2 ping statistics --- 00:28:27.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.137 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:28:27.137 00:28:27.137 --- 10.0.0.1 ping statistics --- 00:28:27.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.137 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3866967 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3866967 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3866967 ']' 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:27.137 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:27.137 [2024-07-25 01:14:20.121258] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:28:27.137 [2024-07-25 01:14:20.121374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.137 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.137 [2024-07-25 01:14:20.203261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.395 [2024-07-25 01:14:20.299875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.395 [2024-07-25 01:14:20.299927] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:27.395 [2024-07-25 01:14:20.299943] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.395 [2024-07-25 01:14:20.299957] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.395 [2024-07-25 01:14:20.299969] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.395 [2024-07-25 01:14:20.303265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.395 [2024-07-25 01:14:20.303310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.395 [2024-07-25 01:14:20.303398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.395 [2024-07-25 01:14:20.303401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.395 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:27.395 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:27.395 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.395 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.395 01:14:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:27.395 01:14:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.395 01:14:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:27.395 01:14:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:30.670 01:14:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:30.670 01:14:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:30.670 01:14:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:30.670 01:14:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:30.928 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:30.928 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:30.928 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:30.928 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:30.928 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:31.185 [2024-07-25 01:14:24.301479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.185 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:31.443 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:31.443 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:31.700 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:31.700 01:14:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:31.958 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.216 [2024-07-25 01:14:25.297115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.216 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:32.473 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:32.473 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:32.473 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:32.473 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:33.875 Initializing NVMe Controllers 00:28:33.875 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:33.876 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:33.876 Initialization complete. Launching workers. 00:28:33.876 ======================================================== 00:28:33.876 Latency(us) 00:28:33.876 Device Information : IOPS MiB/s Average min max 00:28:33.876 PCIE (0000:88:00.0) NSID 1 from core 0: 84884.05 331.58 376.43 44.78 4510.90 00:28:33.876 ======================================================== 00:28:33.876 Total : 84884.05 331.58 376.43 44.78 4510.90 00:28:33.876 00:28:33.876 01:14:26 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:33.876 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.247 Initializing NVMe Controllers 00:28:35.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:35.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:35.247 Initialization complete. Launching workers. 
00:28:35.247 ======================================================== 00:28:35.247 Latency(us) 00:28:35.247 Device Information : IOPS MiB/s Average min max 00:28:35.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 85.00 0.33 11864.55 194.15 45967.82 00:28:35.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.00 0.22 17692.29 7926.82 59857.01 00:28:35.247 ======================================================== 00:28:35.247 Total : 142.00 0.55 14203.85 194.15 59857.01 00:28:35.247 00:28:35.247 01:14:28 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:35.247 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.181 Initializing NVMe Controllers 00:28:36.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:36.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:36.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:36.181 Initialization complete. Launching workers. 00:28:36.181 ======================================================== 00:28:36.181 Latency(us) 00:28:36.181 Device Information : IOPS MiB/s Average min max 00:28:36.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8338.88 32.57 3838.29 422.79 9776.09 00:28:36.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3878.59 15.15 8304.00 4995.23 15761.86 00:28:36.181 ======================================================== 00:28:36.181 Total : 12217.47 47.72 5255.99 422.79 15761.86 00:28:36.181 00:28:36.181 01:14:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:36.181 01:14:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:36.181 01:14:29 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:36.181 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.707 Initializing NVMe Controllers 00:28:38.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.707 Controller IO queue size 128, less than required. 00:28:38.707 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:38.707 Controller IO queue size 128, less than required. 00:28:38.707 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:38.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:38.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:38.707 Initialization complete. Launching workers. 
00:28:38.707 ======================================================== 00:28:38.707 Latency(us) 00:28:38.707 Device Information : IOPS MiB/s Average min max 00:28:38.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1357.30 339.32 95739.54 55942.34 175359.51 00:28:38.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 595.91 148.98 226001.90 69531.17 356084.21 00:28:38.707 ======================================================== 00:28:38.707 Total : 1953.21 488.30 135481.73 55942.34 356084.21 00:28:38.707 00:28:38.707 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:38.707 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.965 No valid NVMe controllers or AIO or URING devices found 00:28:38.965 Initializing NVMe Controllers 00:28:38.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.965 Controller IO queue size 128, less than required. 00:28:38.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:38.965 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:38.965 Controller IO queue size 128, less than required. 00:28:38.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:38.965 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:38.965 WARNING: Some requested NVMe devices were skipped 00:28:38.965 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:38.965 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.492 Initializing NVMe Controllers 00:28:41.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.492 Controller IO queue size 128, less than required. 00:28:41.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:41.492 Controller IO queue size 128, less than required. 00:28:41.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:41.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:41.492 Initialization complete. Launching workers. 
00:28:41.492 00:28:41.492 ==================== 00:28:41.492 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:41.492 TCP transport: 00:28:41.492 polls: 18142 00:28:41.492 idle_polls: 6451 00:28:41.492 sock_completions: 11691 00:28:41.492 nvme_completions: 5215 00:28:41.492 submitted_requests: 7838 00:28:41.492 queued_requests: 1 00:28:41.492 00:28:41.492 ==================== 00:28:41.492 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:41.492 TCP transport: 00:28:41.492 polls: 21522 00:28:41.492 idle_polls: 10078 00:28:41.492 sock_completions: 11444 00:28:41.492 nvme_completions: 4541 00:28:41.492 submitted_requests: 6748 00:28:41.492 queued_requests: 1 00:28:41.492 ======================================================== 00:28:41.492 Latency(us) 00:28:41.492 Device Information : IOPS MiB/s Average min max 00:28:41.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1303.50 325.87 100365.72 63214.61 159266.81 00:28:41.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1135.00 283.75 114216.65 53353.88 165995.68 00:28:41.492 ======================================================== 00:28:41.492 Total : 2438.49 609.62 106812.63 53353.88 165995.68 00:28:41.492 00:28:41.492 01:14:34 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:41.492 01:14:34 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.750 01:14:34 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:41.750 01:14:34 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:41.750 01:14:34 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:45.028 01:14:38 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=2094fba4-988e-45b8-bf4a-43120aa07228 00:28:45.028 01:14:38 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 2094fba4-988e-45b8-bf4a-43120aa07228 00:28:45.028 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=2094fba4-988e-45b8-bf4a-43120aa07228 00:28:45.028 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:45.028 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:45.028 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:45.028 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:45.285 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:45.285 { 00:28:45.285 "uuid": "2094fba4-988e-45b8-bf4a-43120aa07228", 00:28:45.285 "name": "lvs_0", 00:28:45.285 "base_bdev": "Nvme0n1", 00:28:45.285 "total_data_clusters": 238234, 00:28:45.285 "free_clusters": 238234, 00:28:45.285 "block_size": 512, 00:28:45.285 "cluster_size": 4194304 00:28:45.285 } 00:28:45.285 ]' 00:28:45.285 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="2094fba4-988e-45b8-bf4a-43120aa07228") .free_clusters' 00:28:45.285 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:45.285 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2094fba4-988e-45b8-bf4a-43120aa07228") .cluster_size' 00:28:45.286 01:14:38 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:45.286 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:45.286 01:14:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:45.286 952936 00:28:45.286 01:14:38 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:45.286 01:14:38 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:45.286 01:14:38 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2094fba4-988e-45b8-bf4a-43120aa07228 lbd_0 20480 00:28:45.849 01:14:38 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=70650677-9003-4c1a-8c0b-3f3491d67f45 00:28:45.849 01:14:38 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 70650677-9003-4c1a-8c0b-3f3491d67f45 lvs_n_0 00:28:46.778 01:14:39 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=4f6d907d-25a6-49f7-9188-b5e9fc27f83c 00:28:46.778 01:14:39 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 4f6d907d-25a6-49f7-9188-b5e9fc27f83c 00:28:46.778 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=4f6d907d-25a6-49f7-9188-b5e9fc27f83c 00:28:46.778 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:46.778 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:46.778 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:46.778 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:46.778 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:46.778 { 00:28:46.778 "uuid": "2094fba4-988e-45b8-bf4a-43120aa07228", 00:28:46.778 "name": "lvs_0", 00:28:46.778 "base_bdev": "Nvme0n1", 00:28:46.779 "total_data_clusters": 238234, 00:28:46.779 "free_clusters": 233114, 00:28:46.779 "block_size": 512, 00:28:46.779 "cluster_size": 4194304 00:28:46.779 }, 00:28:46.779 { 00:28:46.779 "uuid": "4f6d907d-25a6-49f7-9188-b5e9fc27f83c", 00:28:46.779 "name": "lvs_n_0", 00:28:46.779 "base_bdev": "70650677-9003-4c1a-8c0b-3f3491d67f45", 00:28:46.779 "total_data_clusters": 5114, 00:28:46.779 "free_clusters": 5114, 00:28:46.779 "block_size": 512, 00:28:46.779 "cluster_size": 4194304 00:28:46.779 } 00:28:46.779 ]' 00:28:46.779 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="4f6d907d-25a6-49f7-9188-b5e9fc27f83c") .free_clusters' 00:28:46.779 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:46.779 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="4f6d907d-25a6-49f7-9188-b5e9fc27f83c") .cluster_size' 00:28:47.036 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:47.036 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:47.036 01:14:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:47.036 20456 00:28:47.036 01:14:39 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:47.036 01:14:39 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4f6d907d-25a6-49f7-9188-b5e9fc27f83c lbd_nest_0 20456 00:28:47.293 01:14:40 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=fb62a25c-5625-4ee8-9186-e16b312908b9 00:28:47.293 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.550 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:47.550 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 fb62a25c-5625-4ee8-9186-e16b312908b9 00:28:47.550 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.807 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:47.807 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:47.807 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:47.807 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:47.807 01:14:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:47.807 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.994 Initializing NVMe Controllers 00:28:59.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:59.994 Initialization complete. Launching workers. 00:28:59.994 ======================================================== 00:28:59.994 Latency(us) 00:28:59.994 Device Information : IOPS MiB/s Average min max 00:28:59.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.50 0.02 20263.16 211.22 46884.01 00:28:59.994 ======================================================== 00:28:59.994 Total : 49.50 0.02 20263.16 211.22 46884.01 00:28:59.994 00:28:59.994 01:14:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:59.994 01:14:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:59.994 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.030 Initializing NVMe Controllers 00:29:10.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:10.030 Initialization complete. Launching workers. 
00:29:10.030 ======================================================== 00:29:10.030 Latency(us) 00:29:10.030 Device Information : IOPS MiB/s Average min max 00:29:10.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.48 9.94 12601.23 5034.82 50880.02 00:29:10.030 ======================================================== 00:29:10.030 Total : 79.48 9.94 12601.23 5034.82 50880.02 00:29:10.030 00:29:10.030 01:15:01 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:10.030 01:15:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:10.030 01:15:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.030 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.991 Initializing NVMe Controllers 00:29:19.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.991 Initialization complete. Launching workers. 00:29:19.991 ======================================================== 00:29:19.991 Latency(us) 00:29:19.991 Device Information : IOPS MiB/s Average min max 00:29:19.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6703.40 3.27 4774.23 307.55 12160.12 00:29:19.991 ======================================================== 00:29:19.991 Total : 6703.40 3.27 4774.23 307.55 12160.12 00:29:19.991 00:29:19.991 01:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:19.991 01:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:19.991 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.961 Initializing NVMe Controllers 00:29:29.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:29.961 Initialization complete. Launching workers. 00:29:29.961 ======================================================== 00:29:29.961 Latency(us) 00:29:29.961 Device Information : IOPS MiB/s Average min max 00:29:29.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2681.70 335.21 11939.78 681.32 28898.27 00:29:29.961 ======================================================== 00:29:29.961 Total : 2681.70 335.21 11939.78 681.32 28898.27 00:29:29.961 00:29:29.961 01:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:29.961 01:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:29.961 01:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.961 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.920 Initializing NVMe Controllers 00:29:39.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.920 Controller IO queue size 128, less than required. 00:29:39.920 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:39.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:39.920 Initialization complete. Launching workers. 00:29:39.920 ======================================================== 00:29:39.920 Latency(us) 00:29:39.920 Device Information : IOPS MiB/s Average min max 00:29:39.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11873.10 5.80 10789.18 1695.69 54790.72 00:29:39.920 ======================================================== 00:29:39.920 Total : 11873.10 5.80 10789.18 1695.69 54790.72 00:29:39.920 00:29:39.920 01:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:39.920 01:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.920 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.108 Initializing NVMe Controllers 00:29:52.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.108 Controller IO queue size 128, less than required. 00:29:52.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:52.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:52.108 Initialization complete. Launching workers. 00:29:52.108 ======================================================== 00:29:52.108 Latency(us) 00:29:52.108 Device Information : IOPS MiB/s Average min max 00:29:52.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1199.61 149.95 107592.14 15302.43 212430.44 00:29:52.108 ======================================================== 00:29:52.108 Total : 1199.61 149.95 107592.14 15302.43 212430.44 00:29:52.108 00:29:52.108 01:15:43 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.108 01:15:43 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fb62a25c-5625-4ee8-9186-e16b312908b9 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 70650677-9003-4c1a-8c0b-3f3491d67f45 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.108 rmmod nvme_tcp 00:29:52.108 rmmod nvme_fabrics 00:29:52.108 rmmod nvme_keyring 00:29:52.108 01:15:44 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3866967 ']' 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3866967 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3866967 ']' 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3866967 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3866967 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3866967' 00:29:52.108 killing process with pid 3866967 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3866967 00:29:52.108 01:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3866967 00:29:53.481 01:15:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:53.481 01:15:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:53.481 01:15:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:53.481 01:15:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:53.481 01:15:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:53.481 01:15:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.481 01:15:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:53.481 01:15:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.013 01:15:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.013 00:29:56.013 real 1m30.742s 00:29:56.013 user 5m28.156s 00:29:56.013 sys 0m17.028s 00:29:56.013 01:15:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:56.013 01:15:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:56.013 ************************************ 00:29:56.013 END TEST nvmf_perf 00:29:56.013 ************************************ 00:29:56.013 01:15:48 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:56.013 01:15:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:56.013 01:15:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:56.013 01:15:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.013 ************************************ 00:29:56.013 START TEST nvmf_fio_host 00:29:56.013 ************************************ 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:56.013 * Looking for test storage... 
00:29:56.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.013 01:15:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:56.014 01:15:48 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.956 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:57.957 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
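The enumeration above has just matched 0000:0a:00.0 against the Intel E810 ID table (0x8086:0x159b, kernel driver ice); the checks that follow resolve each matched function to its kernel net device through sysfs, producing the "Found net devices under ..." lines. A compressed sketch of that lookup, assuming the standard sysfs layout the trace relies on:

# Sketch: map an NVMe-oF-capable PCI function to its net device via sysfs,
# as done for 0000:0a:00.0 / 0000:0a:00.1 in the trace.
pci=0000:0a:00.0
vendor=$(cat /sys/bus/pci/devices/"$pci"/vendor)   # expect 0x8086
device=$(cat /sys/bus/pci/devices/"$pci"/device)   # expect 0x159b
echo "Found $pci ($vendor - $device)"

for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue   # no net device bound to this function
    echo "Found net devices under $pci: $(basename "$netdir")"
done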
00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:57.957 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:57.957 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:57.957 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
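With is_hw=yes established, nvmf_tcp_init splits the dual-port NIC into a point-to-point topology: cvl_0_0 moves into a private namespace as the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the exact commands the trace executes below:

# Sketch of the nvmf_tcp_init topology, condensed from the trace.
TARGET_NS=cvl_0_0_ns_spdk

ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# Admit NVMe/TCP traffic (port 4420) on the initiator interface, then
# verify reachability in both directions before the test proceeds.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

Isolating the target port in its own namespace forces the NVMe/TCP traffic onto a real path between the two physical ports instead of letting the kernel short-circuit it over loopback, which is what makes single-host perf runs like these meaningful.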
00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:57.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:29:57.957 00:29:57.957 --- 10.0.0.2 ping statistics --- 00:29:57.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.957 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:57.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:29:57.957 00:29:57.957 --- 10.0.0.1 ping statistics --- 00:29:57.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.957 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3879580 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3879580 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3879580 ']' 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:57.957 01:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.957 [2024-07-25 01:15:50.930545] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:29:57.957 [2024-07-25 01:15:50.930634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.957 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.957 [2024-07-25 01:15:50.998650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.957 [2024-07-25 01:15:51.089340] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:57.957 [2024-07-25 01:15:51.089400] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.957 [2024-07-25 01:15:51.089426] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.957 [2024-07-25 01:15:51.089439] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.957 [2024-07-25 01:15:51.089451] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.957 [2024-07-25 01:15:51.089543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.957 [2024-07-25 01:15:51.089622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.957 [2024-07-25 01:15:51.089717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.957 [2024-07-25 01:15:51.089719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.215 01:15:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:58.215 01:15:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:58.215 01:15:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:58.473 [2024-07-25 01:15:51.467612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.473 01:15:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:58.473 01:15:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.473 01:15:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.473 01:15:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:58.731 Malloc1 00:29:58.731 01:15:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.988 01:15:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:59.246 01:15:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.504 [2024-07-25 01:15:52.527237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.504 01:15:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:59.762 01:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:00.019 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:00.019 fio-3.35 00:30:00.019 Starting 1 thread 00:30:00.019 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.546 00:30:02.546 test: (groupid=0, jobs=1): err= 0: pid=3879937: Thu Jul 25 01:15:55 2024 00:30:02.546 read: IOPS=8880, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2007msec) 00:30:02.546 slat (usec): min=2, max=158, avg= 2.68, stdev= 1.87 00:30:02.546 clat (usec): min=2518, max=13867, avg=7945.88, stdev=619.80 00:30:02.546 lat (usec): min=2547, max=13870, avg=7948.56, stdev=619.69 00:30:02.546 clat percentiles (usec): 00:30:02.546 | 1.00th=[ 6587], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:30:02.546 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:30:02.546 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:30:02.546 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[12125], 99.95th=[13173], 00:30:02.546 | 99.99th=[13829] 00:30:02.546 bw ( KiB/s): min=34648, 
max=36112, per=99.94%, avg=35504.00, stdev=616.05, samples=4 00:30:02.546 iops : min= 8662, max= 9028, avg=8876.00, stdev=154.01, samples=4 00:30:02.546 write: IOPS=8895, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2007msec); 0 zone resets 00:30:02.546 slat (usec): min=2, max=180, avg= 2.79, stdev= 1.65 00:30:02.546 clat (usec): min=1464, max=12336, avg=6409.58, stdev=531.05 00:30:02.546 lat (usec): min=1473, max=12339, avg=6412.37, stdev=531.00 00:30:02.546 clat percentiles (usec): 00:30:02.546 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:30:02.546 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:30:02.546 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:30:02.546 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[10552], 99.95th=[11469], 00:30:02.546 | 99.99th=[12256] 00:30:02.546 bw ( KiB/s): min=35456, max=35816, per=100.00%, avg=35598.00, stdev=169.38, samples=4 00:30:02.546 iops : min= 8864, max= 8954, avg=8899.50, stdev=42.34, samples=4 00:30:02.546 lat (msec) : 2=0.02%, 4=0.11%, 10=99.70%, 20=0.16% 00:30:02.546 cpu : usr=54.69%, sys=40.58%, ctx=91, majf=0, minf=6 00:30:02.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:02.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:02.546 issued rwts: total=17824,17854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:02.546 00:30:02.546 Run status group 0 (all jobs): 00:30:02.546 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2007-2007msec 00:30:02.546 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2007-2007msec 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:02.546 01:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:02.546 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:02.546 fio-3.35 00:30:02.546 Starting 1 thread 00:30:02.804 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.330 00:30:05.330 test: (groupid=0, jobs=1): err= 0: pid=3880386: Thu Jul 25 01:15:58 2024 00:30:05.330 read: IOPS=7638, BW=119MiB/s (125MB/s)(240MiB/2012msec) 00:30:05.330 slat (nsec): min=2855, max=94535, avg=3750.68, stdev=1876.06 00:30:05.330 clat (usec): min=2500, max=53667, avg=9771.66, stdev=4290.10 00:30:05.330 lat (usec): min=2503, max=53670, avg=9775.41, stdev=4290.11 00:30:05.330 clat percentiles (usec): 00:30:05.330 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7504], 00:30:05.330 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9896], 00:30:05.330 | 70.00th=[10552], 80.00th=[11207], 90.00th=[12518], 95.00th=[13960], 00:30:05.330 | 99.00th=[19792], 99.50th=[49021], 99.90th=[52691], 99.95th=[53216], 00:30:05.330 | 99.99th=[53740] 00:30:05.330 bw ( KiB/s): min=51744, max=78499, per=51.27%, avg=62664.75, stdev=11604.54, samples=4 00:30:05.330 iops : min= 3234, max= 4906, avg=3916.50, stdev=725.20, samples=4 00:30:05.330 write: IOPS=4630, BW=72.3MiB/s (75.9MB/s)(128MiB/1773msec); 0 zone resets 00:30:05.330 slat (usec): min=30, max=155, avg=34.24, stdev= 5.61 00:30:05.330 clat (usec): min=4162, max=28175, avg=12327.69, stdev=3136.68 00:30:05.330 lat (usec): min=4194, max=28226, avg=12361.92, stdev=3136.61 00:30:05.330 clat percentiles (usec): 00:30:05.330 | 1.00th=[ 7373], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9765], 00:30:05.330 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11600], 60.00th=[12387], 00:30:05.330 | 70.00th=[13566], 80.00th=[15008], 90.00th=[17171], 95.00th=[18220], 00:30:05.331 | 99.00th=[20317], 99.50th=[20579], 99.90th=[25822], 99.95th=[26346], 00:30:05.331 | 99.99th=[28181] 00:30:05.331 bw ( KiB/s): min=55168, max=80862, per=88.14%, avg=65295.50, stdev=11337.16, samples=4 00:30:05.331 iops : min= 3448, max= 5053, avg=4080.75, stdev=708.17, samples=4 00:30:05.331 lat (msec) : 4=0.10%, 10=48.62%, 20=50.20%, 50=0.84%, 100=0.25% 00:30:05.331 cpu : usr=71.16%, sys=25.41%, 
ctx=45, majf=0, minf=2 00:30:05.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:05.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:05.331 issued rwts: total=15369,8209,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:05.331 00:30:05.331 Run status group 0 (all jobs): 00:30:05.331 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=240MiB (252MB), run=2012-2012msec 00:30:05.331 WRITE: bw=72.3MiB/s (75.9MB/s), 72.3MiB/s-72.3MiB/s (75.9MB/s-75.9MB/s), io=128MiB (134MB), run=1773-1773msec 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:05.331 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:08.628 Nvme0n1 00:30:08.628 01:16:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:11.149 01:16:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d5150365-b7b0-47f8-81c2-064e0a9d4491 00:30:11.149 01:16:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d5150365-b7b0-47f8-81c2-064e0a9d4491 00:30:11.149 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=d5150365-b7b0-47f8-81c2-064e0a9d4491 00:30:11.149 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:11.149 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:11.149 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:11.149 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:11.407 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:11.407 { 00:30:11.407 "uuid": "d5150365-b7b0-47f8-81c2-064e0a9d4491", 00:30:11.407 "name": "lvs_0", 00:30:11.407 "base_bdev": "Nvme0n1", 00:30:11.407 "total_data_clusters": 930, 00:30:11.407 "free_clusters": 930, 
00:30:11.407 "block_size": 512, 00:30:11.407 "cluster_size": 1073741824 00:30:11.407 } 00:30:11.407 ]' 00:30:11.407 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d5150365-b7b0-47f8-81c2-064e0a9d4491") .free_clusters' 00:30:11.407 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:11.407 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d5150365-b7b0-47f8-81c2-064e0a9d4491") .cluster_size' 00:30:11.664 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:11.664 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:11.664 01:16:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:11.664 952320 00:30:11.664 01:16:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:11.921 38855e61-dddb-4b8d-8b0d-31a6c191b28a 00:30:11.922 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:12.179 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:12.437 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:12.694 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.694 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.694 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:12.694 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:12.694 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:12.695 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.952 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:12.952 fio-3.35 00:30:12.952 Starting 1 thread 00:30:12.952 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.479 00:30:15.479 test: (groupid=0, jobs=1): err= 0: pid=3881667: Thu Jul 25 01:16:08 2024 00:30:15.479 read: IOPS=6046, BW=23.6MiB/s (24.8MB/s)(47.4MiB/2007msec) 00:30:15.479 slat (nsec): min=1947, max=119008, avg=2654.50, stdev=1982.86 00:30:15.479 clat (usec): min=953, max=171232, avg=11649.01, stdev=11613.72 00:30:15.479 lat (usec): min=956, max=171272, avg=11651.67, stdev=11613.93 00:30:15.479 clat percentiles (msec): 00:30:15.479 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:15.479 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:15.479 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:15.479 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:30:15.479 | 99.99th=[ 171] 00:30:15.479 bw ( KiB/s): min=16976, max=26704, per=99.73%, avg=24120.00, stdev=4765.27, samples=4 00:30:15.479 iops : min= 4244, max= 6676, avg=6030.00, stdev=1191.32, samples=4 00:30:15.479 write: IOPS=6027, BW=23.5MiB/s (24.7MB/s)(47.3MiB/2007msec); 0 zone resets 00:30:15.479 slat (usec): min=2, max=106, avg= 2.79, stdev= 1.60 00:30:15.479 clat (usec): min=309, max=169277, avg=9409.63, stdev=10901.62 00:30:15.479 lat (usec): min=317, max=169283, avg=9412.42, stdev=10901.83 00:30:15.479 clat percentiles (msec): 00:30:15.479 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:15.479 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:15.479 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:15.479 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:30:15.479 | 99.99th=[ 169] 00:30:15.479 bw ( KiB/s): min=17960, max=26176, per=99.92%, avg=24090.00, stdev=4087.11, samples=4 00:30:15.479 iops : min= 4490, max= 6544, avg=6022.50, stdev=1021.78, samples=4 00:30:15.479 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:15.479 lat (msec) : 2=0.02%, 4=0.12%, 10=56.92%, 20=42.39%, 250=0.53% 00:30:15.479 cpu : usr=58.52%, sys=38.19%, ctx=130, majf=0, minf=20 00:30:15.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:15.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:15.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:15.479 issued rwts: total=12135,12097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:15.479 00:30:15.479 Run status group 0 (all jobs): 00:30:15.479 READ: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=47.4MiB (49.7MB), run=2007-2007msec 00:30:15.479 WRITE: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.3MiB (49.5MB), run=2007-2007msec 00:30:15.479 01:16:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:15.737 01:16:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:16.669 01:16:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9e8bcd6f-7c03-4cb7-9d74-01f7aa9e0112 00:30:16.669 01:16:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 9e8bcd6f-7c03-4cb7-9d74-01f7aa9e0112 00:30:16.669 01:16:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=9e8bcd6f-7c03-4cb7-9d74-01f7aa9e0112 00:30:16.669 01:16:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:16.669 01:16:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:16.669 01:16:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:16.669 01:16:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:16.927 01:16:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:16.927 { 00:30:16.927 "uuid": "d5150365-b7b0-47f8-81c2-064e0a9d4491", 00:30:16.927 "name": "lvs_0", 00:30:16.927 "base_bdev": "Nvme0n1", 00:30:16.927 "total_data_clusters": 930, 00:30:16.927 "free_clusters": 0, 00:30:16.927 "block_size": 512, 00:30:16.927 "cluster_size": 1073741824 00:30:16.927 }, 00:30:16.927 { 00:30:16.927 "uuid": "9e8bcd6f-7c03-4cb7-9d74-01f7aa9e0112", 00:30:16.927 "name": "lvs_n_0", 00:30:16.927 "base_bdev": "38855e61-dddb-4b8d-8b0d-31a6c191b28a", 00:30:16.927 "total_data_clusters": 237847, 00:30:16.927 "free_clusters": 237847, 00:30:16.927 "block_size": 512, 00:30:16.927 "cluster_size": 4194304 00:30:16.927 } 00:30:16.927 ]' 00:30:16.927 01:16:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9e8bcd6f-7c03-4cb7-9d74-01f7aa9e0112") .free_clusters' 00:30:16.927 01:16:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:16.927 01:16:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9e8bcd6f-7c03-4cb7-9d74-01f7aa9e0112") .cluster_size' 00:30:17.185 01:16:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:17.185 01:16:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:17.185 01:16:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:17.185 951388 00:30:17.185 01:16:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:17.749 bb25412d-b702-4f00-af00-21d53567b1b6 00:30:17.749 01:16:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:18.007 01:16:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:18.264 01:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.522 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:18.523 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:18.523 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:18.523 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:18.523 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:18.523 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.790 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:18.790 fio-3.35 00:30:18.790 Starting 1 thread 00:30:18.790 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.372 00:30:21.372 test: (groupid=0, jobs=1): err= 0: pid=3882398: Thu Jul 25 01:16:14 2024 00:30:21.372 read: IOPS=5860, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec) 00:30:21.372 slat (nsec): min=1956, max=169946, avg=2561.04, stdev=2157.42 00:30:21.372 clat (usec): min=4446, max=21013, avg=12009.57, stdev=1057.15 00:30:21.372 lat (usec): min=4451, max=21016, avg=12012.13, stdev=1057.05 00:30:21.372 clat percentiles (usec): 00:30:21.372 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:30:21.372 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:30:21.372 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:30:21.372 | 99.00th=[14222], 99.50th=[14615], 99.90th=[17433], 99.95th=[19006], 00:30:21.372 | 99.99th=[20055] 00:30:21.372 bw ( KiB/s): min=22296, max=23880, per=99.91%, avg=23420.00, stdev=753.57, samples=4 00:30:21.372 iops : min= 5574, max= 5970, avg=5855.00, stdev=188.39, samples=4 00:30:21.372 write: IOPS=5851, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2009msec); 0 zone resets 00:30:21.372 slat (usec): min=2, max=113, avg= 2.64, stdev= 1.37 00:30:21.372 clat (usec): min=2164, max=18674, avg=9679.65, stdev=905.32 00:30:21.372 lat (usec): min=2170, max=18677, avg=9682.29, stdev=905.27 00:30:21.372 clat percentiles (usec): 00:30:21.372 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:30:21.372 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:30:21.372 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:30:21.372 | 99.00th=[11600], 99.50th=[11994], 99.90th=[15795], 99.95th=[17433], 00:30:21.372 | 99.99th=[18744] 00:30:21.372 bw ( KiB/s): min=23160, max=23544, per=99.90%, avg=23382.00, stdev=164.52, samples=4 00:30:21.372 iops : min= 5790, max= 5886, avg=5845.50, stdev=41.13, samples=4 00:30:21.372 lat (msec) : 4=0.05%, 10=33.89%, 20=66.06%, 50=0.01% 00:30:21.372 cpu : usr=59.16%, sys=37.60%, ctx=93, majf=0, minf=20 00:30:21.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:21.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.373 issued rwts: total=11773,11755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.373 00:30:21.373 Run status group 0 (all jobs): 00:30:21.373 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.2MB), run=2009-2009msec 00:30:21.373 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.1MB), run=2009-2009msec 00:30:21.373 01:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:21.373 01:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:21.373 01:16:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:25.552 01:16:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore 
-l lvs_n_0 00:30:25.552 01:16:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:28.880 01:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:28.880 01:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:30.777 rmmod nvme_tcp 00:30:30.777 rmmod nvme_fabrics 00:30:30.777 rmmod nvme_keyring 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3879580 ']' 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3879580 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3879580 ']' 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3879580 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3879580 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3879580' 00:30:30.777 killing process with pid 3879580 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3879580 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3879580 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:30.777 01:16:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.678 01:16:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:32.678 00:30:32.678 real 0m37.125s 00:30:32.678 user 2m21.199s 00:30:32.678 sys 0m7.298s 00:30:32.678 01:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:32.678 01:16:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.678 ************************************ 00:30:32.678 END TEST nvmf_fio_host 00:30:32.678 ************************************ 00:30:32.678 01:16:25 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:32.678 01:16:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:32.678 01:16:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:32.678 01:16:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:32.937 ************************************ 00:30:32.937 START TEST nvmf_failover 00:30:32.937 ************************************ 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:32.937 * Looking for test storage... 00:30:32.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.937 01:16:25 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:32.938 01:16:25 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:32.938 01:16:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.838 01:16:27 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:34.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:34.838 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:34.838 01:16:27 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:34.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:34.838 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:34.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.839 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.097 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.097 
01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.097 01:16:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:35.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:30:35.097 00:30:35.097 --- 10.0.0.2 ping statistics --- 00:30:35.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.097 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:30:35.097 00:30:35.097 --- 10.0.0.1 ping statistics --- 00:30:35.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.097 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:35.097 01:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3885640 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3885640 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3885640 ']' 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:35.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:35.098 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:35.098 [2024-07-25 01:16:28.124963] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:30:35.098 [2024-07-25 01:16:28.125048] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.098 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.098 [2024-07-25 01:16:28.192565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.355 [2024-07-25 01:16:28.279697] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.355 [2024-07-25 01:16:28.279749] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.355 [2024-07-25 01:16:28.279775] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.355 [2024-07-25 01:16:28.279786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.356 [2024-07-25 01:16:28.279795] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.356 [2024-07-25 01:16:28.279844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.356 [2024-07-25 01:16:28.279902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.356 [2024-07-25 01:16:28.279905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.356 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:35.356 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:35.356 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:35.356 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.356 01:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:35.356 01:16:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.356 01:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:35.613 [2024-07-25 01:16:28.628438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.613 01:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:35.870 Malloc0 00:30:35.870 01:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:36.127 01:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:36.383 01:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.639 [2024-07-25 01:16:29.646930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.639 01:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:36.896 [2024-07-25 01:16:29.891719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:36.896 01:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:37.153 [2024-07-25 01:16:30.136699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3885924 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3885924 /var/tmp/bdevperf.sock 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3885924 ']' 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:37.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
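For readers reconstructing this failover setup by hand, the xtrace lines above condense into the shell sketch below. Every command is copied from this trace; only the $spdk/$rpc shorthands, the loop over ports, and the inline comments are editorial additions, and the comments describe inferred intent rather than script output:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py

    # Target side: TCP transport, one malloc-backed namespace, and three
    # listeners on the same subsystem so the initiator has paths to fail
    # over between (ports as added by failover.sh@26..@28 above).
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

    # Initiator side: bdevperf is started with -z (wait for RPC start-up)
    # on its own RPC socket, so controllers can be attached to it later
    # over /var/tmp/bdevperf.sock (flags exactly as in the trace).
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 15 -f &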
00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:37.153 01:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:37.411 01:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:37.411 01:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:37.411 01:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.975 NVMe0n1 00:30:37.975 01:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:38.232 00:30:38.232 01:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3886062 00:30:38.232 01:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:38.232 01:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:39.604 01:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.604 [2024-07-25 01:16:32.571449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [2024-07-25 01:16:32.571735] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4d50 is same with the state(5) to be set 00:30:39.604 [... identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x1bf4d50 repeated through 01:16:32.571913; duplicates omitted ...] 00:30:39.604 01:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:42.954 01:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:42.954 00 00:30:42.954 01:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:43.212 [2024-07-25 01:16:36.204651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5bd0 is same with the state(5) to be set 00:30:43.212
[... identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x1bf5bd0 repeated through 01:16:36.205282; duplicates omitted ...]
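To keep the shape of the test visible through the log noise: bdevperf stacks multiple TCP paths under one bdev name, and the script alternates listener removals and additions so the verify workload is forced across all three ports. A sketch of the whole sequence, using the same assumed shorthands as the earlier sketch; each RPC invocation below appears verbatim in the trace, and the sleeps mirror the host/failover.sh steps shown around it.

    # Host side: three paths multiplexed under a single bdev (NVMe0n1).
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!

    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # 1st failover
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421  # 2nd failover
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # restore 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422  # 3rd failover
    wait "$run_test_pid"   # nonzero here would mean the verify workload did not survive

The bursts of nvmf_tcp_qpair_set_recv_state errors that bracket each removal accompany the target tearing down the dropped listener's qpairs; they do not indicate test failure here, since the wait on the perform_tests runner below returns 0.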
00:30:43.213 01:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:46.491 01:16:39 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.491 [2024-07-25 01:16:39.463087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.491 01:16:39 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:47.424 01:16:40 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:47.682 [2024-07-25 01:16:40.721485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6750 is same with the state(5) to be set 00:30:47.682 [... identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x1bf6750 repeated through 01:16:40.722013; duplicates omitted ...] 00:30:47.682 01:16:40 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3886062 00:30:54.244 0 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3885924 00:30:54.244 01:16:46
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3885924 ']' 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3885924 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3885924 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3885924' 00:30:54.244 killing process with pid 3885924 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3885924 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3885924 00:30:54.244 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:54.244 [2024-07-25 01:16:30.200752] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:30:54.244 [2024-07-25 01:16:30.200855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885924 ] 00:30:54.244 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.244 [2024-07-25 01:16:30.264096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.244 [2024-07-25 01:16:30.352503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.244 Running I/O for 15 seconds... 
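The dump that follows is the bdevperf-side record (try.txt) of what the listener removals did to in-flight I/O: each READ/WRITE queued on a torn-down connection completes with generic status 00/08, which SPDK prints as ABORTED - SQ DELETION, and the I/O is then retried once the controller has failed over to a surviving path, consistent with the run finishing with status 0 above. For triaging a dump like this, simple one-liners are usually enough; the helpers below are hypothetical, not part of the harness:

    # Hypothetical triage helpers for a try.txt-style dump (not part of failover.sh):
    grep -c 'ABORTED - SQ DELETION' try.txt                        # total aborted completions
    grep -oE '(READ|WRITE) sqid:[0-9]+' try.txt | sort | uniq -c   # aborted I/O split by opcode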
00:30:54.244 [2024-07-25 01:16:32.573680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.573985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.573998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574013] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.244 [2024-07-25 01:16:32.574407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.244 [2024-07-25 01:16:32.574424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.245 [2024-07-25 01:16:32.574453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.245 [2024-07-25 01:16:32.574482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.245 [2024-07-25 01:16:32.574510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.245 [2024-07-25 01:16:32.574553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.245 [2024-07-25 01:16:32.574580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.245 [2024-07-25 01:16:32.574606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.245 [2024-07-25 01:16:32.574633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78728 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.574973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.574986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 
01:16:32.575176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.245 [2024-07-25 01:16:32.575534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.245 [2024-07-25 01:16:32.575572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.575982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.575996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.246 [2024-07-25 01:16:32.576008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.576042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.246 [2024-07-25 01:16:32.576058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:30:54.246 [2024-07-25 01:16:32.576071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.246 [2024-07-25 01:16:32.576087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.246 [2024-07-25 01:16:32.576098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually:
00:30:54.246 [2024-07-25 01:16:32.576109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0
00:30:54.246 [2024-07-25 01:16:32.576121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same four-entry sequence (aborting queued i/o, Command completed manually, WRITE command print, ABORTED - SQ DELETION completion) repeats verbatim for each queued WRITE from lba:79056 through lba:79392 in steps of 8 ...]
00:30:54.248 [2024-07-25 01:16:32.578353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78624 len:8 PRP1 0x0 PRP2 0x0
00:30:54.248 [2024-07-25 01:16:32.578367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same sequence repeats for the queued READs lba:78632, lba:78640 and lba:78648 ...]
00:30:54.248 [2024-07-25 01:16:32.578586] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21feb50 was disconnected and freed. reset controller.
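Every completion above carries the same status pair, printed as "(00/08)": status code type 0x00 (generic command status) and status code 0x08 (ABORTED - SQ DELETION), i.e. the I/O was aborted because its submission queue was torn down, not because of a media error. As a minimal sketch (not part of this test's code, assuming only the public spdk/nvme.h definitions), an application's completion callback can detect this case:

#include <stdbool.h>
#include "spdk/nvme.h"

/* Returns true when a command was aborted only because its submission
 * queue was deleted -- the "(00/08)" status seen throughout the log. */
static bool
aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	return spdk_nvme_cpl_is_error(cpl) &&
	       cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

I/O that fails this way is generally safe to resubmit on a fresh qpair, which is what the bdev_nvme layer does after the reset that follows.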
00:30:54.248 [2024-07-25 01:16:32.578604] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:54.248 [2024-07-25 01:16:32.578637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.248 [2024-07-25 01:16:32.578670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.248 [2024-07-25 01:16:32.578685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.248 [2024-07-25 01:16:32.578703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.248 [2024-07-25 01:16:32.578723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.248 [2024-07-25 01:16:32.578744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.248 [2024-07-25 01:16:32.578759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.248 [2024-07-25 01:16:32.578772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.248 [2024-07-25 01:16:32.578786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:54.248 [2024-07-25 01:16:32.578844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dfeb0 (9): Bad file descriptor
00:30:54.248 [2024-07-25 01:16:32.582114] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:54.248 [2024-07-25 01:16:32.700881] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
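The disconnect/failover/reset sequence above is bdev_nvme policy: abort everything queued on the dead path, fail over to the second listener (10.0.0.2:4421), and reset the controller before resubmitting. At the driver level this corresponds roughly to a controller reset; a hedged sketch, assuming only the public spdk_nvme_ctrlr_reset() API and with error handling simplified:

#include <stdio.h>
#include "spdk/nvme.h"

/* Rough shape of the recovery the log reports: reset the controller
 * (which disconnects the admin and I/O qpairs, aborting queued I/O with
 * SQ DELETION as logged above), then let the upper layer resubmit the
 * commands that were "completed manually". */
static int
recover_path(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_reset(ctrlr);

	if (rc != 0) {
		fprintf(stderr, "controller reset failed: %d\n", rc);
	}
	return rc;
}

The same abort/reset cycle recurs below at 01:16:36 on the new path; note the second burst prints SGL descriptors rather than PRP entries, as expected for NVMe/TCP queued I/O.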
00:30:54.248 [2024-07-25 01:16:36.206217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:54.248 [2024-07-25 01:16:36.206269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats, with varying cid values and all in steps of 8, for the remaining I/O queued on qid:1: WRITEs lba:98248 through lba:98416, READs lba:97864 through lba:97912 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), WRITEs lba:98424 through lba:98704, READs lba:97920 through lba:98040, WRITEs lba:98712 through lba:98840, READs lba:98048 through lba:98104, then WRITEs from lba:98848 onward ...]
00:30:54.251 [2024-07-25 01:16:36.209672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98880
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.251 [2024-07-25 01:16:36.209686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.251 [2024-07-25 01:16:36.209936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.251 [2024-07-25 01:16:36.209951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:30:54.251 [2024-07-25 01:16:36.209965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.251 [2024-07-25 01:16:36.209981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.251 [2024-07-25 01:16:36.209994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.251 [2024-07-25 01:16:36.210024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:54.252 [2024-07-25 01:16:36.210044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98200 len:8 PRP1 0x0 PRP2 0x0
00:30:54.252 [2024-07-25 01:16:36.210058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:54.252 [2024-07-25 01:16:36.210088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:54.252 [2024-07-25 01:16:36.210099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98208 len:8 PRP1 0x0 PRP2 0x0
00:30:54.252 [2024-07-25 01:16:36.210112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:54.252 [2024-07-25 01:16:36.210136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:54.252 [2024-07-25 01:16:36.210148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98216 len:8 PRP1 0x0 PRP2 0x0
00:30:54.252 [2024-07-25 01:16:36.210160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:54.252 [2024-07-25 01:16:36.210185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:54.252 [2024-07-25 01:16:36.210196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98224 len:8 PRP1 0x0 PRP2 0x0
00:30:54.252 [2024-07-25 01:16:36.210210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:54.252 [2024-07-25 01:16:36.210266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:54.252 [2024-07-25 01:16:36.210280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98232 len:8 PRP1 0x0 PRP2 0x0
00:30:54.252 [2024-07-25 01:16:36.210294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210351] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23a95b0 was disconnected and freed. reset controller.
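The long run of paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices above is the driver draining a qpair that is being torn down for failover: every in-flight READ/WRITE is completed with ABORTED - SQ DELETION, the generic status (sct 00, sc 08) printed as "(00/08)", and the trailing nvme_qpair_manual_complete_request / nvme_qpair_abort_queued_reqs pairs finish the still-queued requests in software before bdev_nvme frees the qpair. An initiator built directly on the SPDK NVMe driver sees the same status in its completion callback and can treat it as retryable rather than fatal. A minimal sketch against the public API, assuming the SPDK headers are available; the io_ctx shape and requeue_io helper are hypothetical, not taken from this test:

#include <inttypes.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Hypothetical per-I/O bookkeeping; only what this sketch needs. */
struct io_ctx {
	uint64_t lba;
	uint32_t lba_count;
};

/* Placeholder for the application's resubmission path. */
static void
requeue_io(struct io_ctx *io)
{
	printf("requeue lba %" PRIu64 " (+%u blocks) after qpair teardown\n",
	       io->lba, io->lba_count);
}

/* spdk_nvme_cmd_cb: ABORTED - SQ DELETION (the "(00/08)" in the log) only
 * means the submission queue went away, e.g. during a failover/reset; the
 * I/O never failed on media, so it is safe to retry. */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* completed normally */
	}

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		requeue_io(io);
		return;
	}

	/* Any other error is a real failure; surface it. */
	fprintf(stderr, "I/O at lba %" PRIu64 " failed: sct %d sc %d\n",
		io->lba, cpl->status.sct, cpl->status.sc);
}

io_complete is the cb_fn one would pass to spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(); bdev_nvme applies the same retry logic internally, which is why these aborts are logged at NOTICE level instead of failing the run.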
00:30:54.252 [2024-07-25 01:16:36.210382] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:54.252 [2024-07-25 01:16:36.210414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.252 [2024-07-25 01:16:36.210432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.252 [2024-07-25 01:16:36.210467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.252 [2024-07-25 01:16:36.210495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.252 [2024-07-25 01:16:36.210532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:36.210545] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:54.252 [2024-07-25 01:16:36.210597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dfeb0 (9): Bad file descriptor
00:30:54.252 [2024-07-25 01:16:36.213920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:54.252 [2024-07-25 01:16:36.403552] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
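That block is one full recovery cycle as bdev_nvme performs it: bdev_nvme_failover_trid re-points the controller at the alternate listener, the admin qpair's outstanding ASYNC EVENT REQUESTs are aborted along with the TCP connection (hence the "Bad file descriptor" flush error on the tqpair), and the controller is disconnected and reset onto the new path. An application driving the NVMe driver directly would do the equivalent by hand. A hedged sketch of that sequence, with error handling trimmed; the assumption that the transport ID was redirected beforehand via spdk_nvme_ctrlr_set_trid() is the sketch's, not something this log states:

#include <stdio.h>

#include "spdk/nvme.h"

/* Recover from a dead transport connection: drop the broken I/O qpair,
 * reset the controller (disconnect, reconnect, re-enable), then build a
 * fresh I/O qpair. If spdk_nvme_ctrlr_set_trid() was called first, the
 * reconnect lands on the alternate path, mirroring in spirit what
 * bdev_nvme's "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422"
 * does internally; this is not the bdev_nvme code itself. */
static struct spdk_nvme_qpair *
recover_ctrlr(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *dead_qpair)
{
	int rc;

	/* The old qpair is unusable once the connection dropped. */
	spdk_nvme_ctrlr_free_io_qpair(dead_qpair);

	rc = spdk_nvme_ctrlr_reset(ctrlr);
	if (rc != 0) {
		fprintf(stderr, "controller reset failed: %d\n", rc);
		return NULL;
	}

	/* NULL opts / size 0 requests the driver defaults. */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}

After the reset completes ("Resetting controller successful."), the aborted I/O is retried on the fresh qpair; four seconds later the log shows another batch of in-flight I/O being aborted as the test triggers the next failover.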
00:30:54.252 [2024-07-25 01:16:40.720738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.252 [2024-07-25 01:16:40.720795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.720823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.252 [2024-07-25 01:16:40.720836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.720851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.252 [2024-07-25 01:16:40.720865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.720879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.252 [2024-07-25 01:16:40.720892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.720905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21dfeb0 is same with the state(5) to be set
00:30:54.252 [2024-07-25 01:16:40.723481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.252 [2024-07-25 01:16:40.723508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.723534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.252 [2024-07-25 01:16:40.723564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.723582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.252 [2024-07-25 01:16:40.723595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.723610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.252 [2024-07-25 01:16:40.723623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.723637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.252 [2024-07-25 01:16:40.723650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.252 [2024-07-25 01:16:40.723664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:54.252 [2024-07-25 01:16:40.723677] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.723975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.723988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.724002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.724015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.724029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.724042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.724056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.252 [2024-07-25 01:16:40.724073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.252 [2024-07-25 01:16:40.724088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 
[2024-07-25 01:16:40.724591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.253 [2024-07-25 01:16:40.724659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.724977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.724990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.725005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.725017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.725032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.725045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.253 [2024-07-25 01:16:40.725060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.253 [2024-07-25 01:16:40.725073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56168 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 
01:16:40.725751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.725981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.725996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.726009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.726024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.254 [2024-07-25 01:16:40.726037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.726071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.254 [2024-07-25 01:16:40.726089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56336 len:8 PRP1 0x0 PRP2 0x0 00:30:54.254 [2024-07-25 01:16:40.726102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.726119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.254 [2024-07-25 01:16:40.726132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.254 [2024-07-25 01:16:40.726143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56344 len:8 PRP1 0x0 PRP2 0x0 00:30:54.254 [2024-07-25 01:16:40.726156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.726169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.254 [2024-07-25 01:16:40.726180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.254 [2024-07-25 01:16:40.726191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56352 len:8 PRP1 0x0 PRP2 0x0 00:30:54.254 [2024-07-25 01:16:40.726204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.726216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.254 [2024-07-25 01:16:40.726227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.254 [2024-07-25 01:16:40.726238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56360 len:8 PRP1 0x0 PRP2 0x0 00:30:54.254 [2024-07-25 01:16:40.726276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.254 [2024-07-25 01:16:40.726292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.255 [2024-07-25 01:16:40.726303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.255 [2024-07-25 01:16:40.726315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56368 len:8 PRP1 0x0 PRP2 0x0 00:30:54.255 [2024-07-25 01:16:40.726328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.255 [2024-07-25 01:16:40.726342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.255 [2024-07-25 01:16:40.726357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.255 [2024-07-25 01:16:40.726369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56376 len:8 PRP1 0x0 PRP2 0x0 00:30:54.255 [2024-07-25 01:16:40.726382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.255 [2024-07-25 01:16:40.726396] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.255 [2024-07-25 01:16:40.726408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.255 [2024-07-25 01:16:40.726419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56384 len:8 PRP1 0x0 PRP2 0x0 00:30:54.255 [2024-07-25 01:16:40.726432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.255 [2024-07-25 01:16:40.726446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.255 [2024-07-25 01:16:40.726457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.255 [2024-07-25 01:16:40.726468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56392 len:8 PRP1 0x0 PRP2 0x0 00:30:54.255 [2024-07-25 01:16:40.726481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.255 [2024-07-25 01:16:40.726494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.255 [2024-07-25 01:16:40.726505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.255 [2024-07-25 01:16:40.726517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56400 len:8 PRP1 0x0 PRP2 0x0 00:30:54.255 [2024-07-25 01:16:40.726530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.255 [2024-07-25 01:16:40.726543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.255 [2024-07-25 01:16:40.726568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.255 [2024-07-25 01:16:40.726579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56408 len:8 PRP1 0x0 PRP2 0x0 00:30:54.255 [2024-07-25 01:16:40.726592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.255 [2024-07-25 01:16:40.726606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.255 [2024-07-25 01:16:40.726616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.255 [2024-07-25 01:16:40.726627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56416 len:8 PRP1 0x0 PRP2 0x0 00:30:54.255 [2024-07-25 01:16:40.726640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.255 [2024-07-25 01:16:40.726653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:54.255 [2024-07-25 01:16:40.726664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:54.255 [2024-07-25 01:16:40.726675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56424 len:8 PRP1 0x0 PRP2 0x0 00:30:54.255 [2024-07-25 01:16:40.726688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.255 [2024-07-25 01:16:40.726701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o
00:30:54.255 [2024-07-25 01:16:40.726712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:54.255 [2024-07-25 01:16:40.726723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56432 len:8 PRP1 0x0 PRP2 0x0
00:30:54.255 [2024-07-25 01:16:40.726736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.255 [2024-07-25 01:16:40.726752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same aborting-queued-i/o / manual-completion / ABORTED - SQ DELETION cycle repeats for WRITE lba:56440 through lba:56640 (step 8) and for READ lba:55936 and lba:55944; the individual entries are elided here ...]
00:30:54.256 [2024-07-25 01:16:40.728182] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22036d0 was disconnected and freed. reset controller.
00:30:54.256 [2024-07-25 01:16:40.728200] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:54.256 [2024-07-25 01:16:40.728215] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:54.256 [2024-07-25 01:16:40.731497] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:54.256 [2024-07-25 01:16:40.731536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dfeb0 (9): Bad file descriptor
00:30:54.256 [2024-07-25 01:16:40.896551] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
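The burst above is the expected signature of a path switch rather than a device error: when the submission queue on the old path is deleted, every request still queued on it is completed manually with ABORTED - SQ DELETION, and the controller then resets onto the path named in the "Start failover" notice. Since the run's bdevperf output is captured to try.txt (cat'ed and removed later in this trace), the pattern can be confirmed after the fact with two greps; a minimal sketch, variable names illustrative:

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  aborted=$(grep -c 'ABORTED - SQ DELETION' "$log")           # queued I/O flushed at each failover
  resets=$(grep -c 'Resetting controller successful' "$log")  # controller resets that completed
  echo "aborted=$aborted resets=$resets"

As a sanity check on the summary that follows, MiB/s is just IOPS times IO size: 8337.73 * 4096 / 2^20 = 32.57 MiB/s, matching the NVMe0n1 row.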
00:30:54.256
00:30:54.256                                                 Latency(us)
00:30:54.256 Device Information          : runtime(s)    IOPS      MiB/s     Fail/s    TO/s      Average     min        max
00:30:54.256 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:54.256   Verification LBA range: start 0x0 length 0x4000
00:30:54.256   NVMe0n1                   : 15.00         8337.73   32.57     1229.29   0.00      13352.34    813.13     17767.54
00:30:54.256 ===================================================================================================================
00:30:54.256 Total                       :               8337.73   32.57     1229.29   0.00      13352.34    813.13     17767.54
00:30:54.256 Received shutdown signal, test time was about 15.000000 seconds
00:30:54.256
00:30:54.256                                                 Latency(us)
00:30:54.256 Device Information          : runtime(s)    IOPS      MiB/s     Fail/s    TO/s      Average     min        max
00:30:54.256 ===================================================================================================================
00:30:54.256 Total                       :               0.00      0.00      0.00      0.00      0.00        0.00       0.00
01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3887901
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3887901 /var/tmp/bdevperf.sock
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3887901 ']'
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:54.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
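The waitforlisten call above polls until the freshly launched bdevperf (held idle by -z until a perform_tests RPC arrives) is accepting RPCs on /var/tmp/bdevperf.sock. Roughly, such a wait loop amounts to the following; this is an illustrative approximation, not the actual autotest_common.sh implementation:

  sock=/var/tmp/bdevperf.sock
  pid=3887901
  for ((i = 0; i < 100; i++)); do            # max_retries=100, as traced above
      kill -0 "$pid" 2>/dev/null || exit 1   # bail out if bdevperf died early
      [[ -S $sock ]] && break                # the UNIX socket appears once the app listens
      sleep 0.1
  done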
00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:54.256 01:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:54.256 01:16:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:54.256 01:16:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:54.256 01:16:47 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:54.256 [2024-07-25 01:16:47.222461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:54.256 01:16:47 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:54.515 [2024-07-25 01:16:47.475181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:54.515 01:16:47 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:54.771 NVMe0n1 00:30:54.772 01:16:47 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.029 00:30:55.029 01:16:48 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.287 00:30:55.544 01:16:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:55.544 01:16:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:55.801 01:16:48 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.059 01:16:48 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:59.334 01:16:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:59.334 01:16:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:59.334 01:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3888562 00:30:59.334 01:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:59.334 01:16:52 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3888562 00:31:00.267 0 00:31:00.267 01:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:00.267 [2024-07-25 01:16:46.759421] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:31:00.267 [2024-07-25 01:16:46.759520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887901 ] 00:31:00.267 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.267 [2024-07-25 01:16:46.820961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.267 [2024-07-25 01:16:46.903649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.267 [2024-07-25 01:16:48.934033] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:00.267 [2024-07-25 01:16:48.934127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.267 [2024-07-25 01:16:48.934149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.267 [2024-07-25 01:16:48.934166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.267 [2024-07-25 01:16:48.934179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.267 [2024-07-25 01:16:48.934193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.267 [2024-07-25 01:16:48.934221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.267 [2024-07-25 01:16:48.934236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.267 [2024-07-25 01:16:48.934257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.267 [2024-07-25 01:16:48.934271] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:00.267 [2024-07-25 01:16:48.934316] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:00.267 [2024-07-25 01:16:48.934348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bfeb0 (9): Bad file descriptor 00:31:00.267 [2024-07-25 01:16:48.942786] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:00.267 Running I/O for 1 seconds... 
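In the log dump above, detaching the active 10.0.0.2:4420 path (host/failover.sh@84, a few steps earlier) is what triggers "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421": the outstanding admin commands (the ASYNC EVENT REQUESTs) are aborted with SQ DELETION and the controller resets onto a surviving path before the 1-second verify run starts. Stripped of trace prefixes, the provoking RPC pair is (socket, addresses and NQN exactly as used in this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # NVMe0 must still exist via 4421/4422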
00:31:00.267
00:31:00.267                                                 Latency(us)
00:31:00.267 Device Information          : runtime(s)    IOPS      MiB/s     Fail/s    TO/s      Average     min        max
00:31:00.267 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:00.267   Verification LBA range: start 0x0 length 0x4000
00:31:00.267   NVMe0n1                   : 1.01          8594.56   33.57     0.00      0.00      14830.60    1426.01    18058.81
00:31:00.267 ===================================================================================================================
00:31:00.267 Total                       :               8594.56   33.57     0.00      0.00      14830.60    1426.01    18058.81
00:31:00.267 01:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:00.267 01:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:00.524 01:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:00.781 01:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:00.781 01:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:01.038 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:01.296 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:31:04.572 01:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:04.572 01:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:31:04.572 01:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3887901
00:31:04.572 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3887901 ']'
00:31:04.572 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3887901
00:31:04.572 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:31:04.572 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:31:04.572 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3887901
00:31:04.573 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:31:04.573 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:31:04.573 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3887901'
00:31:04.573 killing process with pid 3887901
00:31:04.573 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3887901
00:31:04.573 01:16:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3887901
00:31:04.830 01:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:31:04.830 01:16:57 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:05.088 01:16:58 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:31:05.088
01:16:58 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:05.088 01:16:58 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:05.088 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:05.088 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:05.088 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:05.088 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:05.088 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:05.088 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:05.088 rmmod nvme_tcp 00:31:05.088 rmmod nvme_fabrics 00:31:05.088 rmmod nvme_keyring 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3885640 ']' 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3885640 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3885640 ']' 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3885640 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3885640 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3885640' 00:31:05.350 killing process with pid 3885640 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3885640 00:31:05.350 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3885640 00:31:05.649 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:05.649 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:05.649 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:05.649 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:05.649 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:05.649 01:16:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.649 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:05.649 01:16:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.553 01:17:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:07.553 00:31:07.553 real 0m34.691s 00:31:07.553 user 2m0.040s 00:31:07.553 sys 0m6.576s 00:31:07.553 01:17:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:07.553 01:17:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
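Both teardowns above go through the killprocess helper (pids 3887901 and 3885640). Condensed from the traced steps for readability, this is a paraphrase of what the trace shows, not the verbatim autotest_common.sh source:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1    # the traced '[' -z ... ']' guard
      kill -0 "$pid" || return     # is the process still alive?
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      # the real helper special-cases process_name = sudo; both runs here saw reactor_0/reactor_1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                  # works because the target is a child of the test shell
  }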
00:31:07.553 ************************************ 00:31:07.553 END TEST nvmf_failover 00:31:07.553 ************************************ 00:31:07.553 01:17:00 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:07.553 01:17:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:07.553 01:17:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:07.553 01:17:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:07.553 ************************************ 00:31:07.553 START TEST nvmf_host_discovery 00:31:07.553 ************************************ 00:31:07.553 01:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:07.554 * Looking for test storage... 00:31:07.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.554 01:17:00 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:07.554 01:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:09.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:09.454 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:09.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:09.454 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.454 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.712 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:09.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:09.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:31:09.712 00:31:09.712 --- 10.0.0.2 ping statistics --- 00:31:09.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.712 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:31:09.712 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:09.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:31:09.712 00:31:09.712 --- 10.0.0.1 ping statistics --- 00:31:09.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.713 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3891169 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3891169 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3891169 ']' 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:09.713 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.713 [2024-07-25 01:17:02.685270] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:31:09.713 [2024-07-25 01:17:02.685343] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.713 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.713 [2024-07-25 01:17:02.748653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.713 [2024-07-25 01:17:02.832235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.713 [2024-07-25 01:17:02.832309] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.713 [2024-07-25 01:17:02.832330] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.713 [2024-07-25 01:17:02.832341] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.713 [2024-07-25 01:17:02.832351] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
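This is why the target above is launched under "ip netns exec cvl_0_0_ns_spdk": the target NIC (cvl_0_0, 10.0.0.2) lives in its own network namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, as the bidirectional ping just verified. Stripped of trace prefixes, the plumbing traced at nvmf/common.sh@244-@264 reduces to:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1          # start from a clean slate
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP through on the initiator side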
00:31:09.713 [2024-07-25 01:17:02.832376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.971 [2024-07-25 01:17:02.964259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.971 [2024-07-25 01:17:02.972456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.971 null0 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.971 null1 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3891188 00:31:09.971 01:17:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:09.971 01:17:03 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3891188 /tmp/host.sock 00:31:09.971 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3891188 ']' 00:31:09.971 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:09.971 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:09.971 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:09.971 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:09.971 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:09.971 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.971 [2024-07-25 01:17:03.044960] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:31:09.971 [2024-07-25 01:17:03.045027] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891188 ] 00:31:09.971 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.971 [2024-07-25 01:17:03.107046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.229 [2024-07-25 01:17:03.197456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.229 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.487 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.487 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:10.487 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:10.488 01:17:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.488 [2024-07-25 01:17:03.594086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.488 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:10.746 01:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:11.311 [2024-07-25 01:17:04.373049] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:11.311 [2024-07-25 01:17:04.373082] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:11.311 [2024-07-25 01:17:04.373104] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:11.569 [2024-07-25 01:17:04.501515] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:11.569 [2024-07-25 01:17:04.603110] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:11.569 [2024-07-25 01:17:04.603136] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.827 01:17:04 
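
    Every wait loop traced here goes through the waitforcondition helper from common/autotest_common.sh,
    whose behavior can be read off the xtrace markers @910-@916: re-evaluate the condition up to ten
    times, sleeping one second between attempts. A sketch reconstructed from the trace (the in-tree
    helper may differ in minor details):

        waitforcondition() {
            local cond=$1
            local max=10
            while (( max-- )); do
                # the condition is an arbitrary shell expression, re-evaluated each pass
                if eval "$cond"; then
                    return 0
                fi
                sleep 1
            done
            return 1    # condition never became true within ~10 seconds
        }

        # usage, as in the trace: block until discovery has attached the controller
        waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
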
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.827 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.828 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:11.828 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:11.828 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.828 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.828 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.828 01:17:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.828 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.828 01:17:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery 
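
    is_notification_count_eq pairs an expected count with the polling loop above. The trace shows
    get_notification_count asking notify_get_notifications for everything newer than the last consumed
    ID and then advancing that ID (notify_id moves 0 -> 1 -> 2 -> 4 as the null0/null1 events arrive).
    A sketch consistent with the traced values, not necessarily the exact in-tree code:

        get_notification_count() {
            # count events newer than the last notification ID we consumed
            notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
                | jq '. | length')
            notify_id=$((notify_id + notification_count))
        }

        is_notification_count_eq() {
            local expected_count=$1
            waitforcondition 'get_notification_count && ((notification_count == expected_count))'
        }
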
-- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:12.086 01:17:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:13.458 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.459 [2024-07-25 01:17:06.277872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:13.459 [2024-07-25 01:17:06.279013] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:13.459 [2024-07-25 01:17:06.279054] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:13.459 01:17:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:13.459 [2024-07-25 01:17:06.406449] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:13.459 [2024-07-25 01:17:06.509188] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.459 [2024-07-25 01:17:06.509214] 
bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:13.459 [2024-07-25 01:17:06.509225] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.392 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.393 [2024-07-25 01:17:07.502469] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:14.393 [2024-07-25 01:17:07.502507] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:14.393 [2024-07-25 01:17:07.504880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.393 [2024-07-25 01:17:07.504916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.393 [2024-07-25 01:17:07.504935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.393 [2024-07-25 01:17:07.504951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.393 [2024-07-25 01:17:07.504966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.393 [2024-07-25 01:17:07.504993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.393 [2024-07-25 01:17:07.505008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.393 [2024-07-25 01:17:07.505022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.393 [2024-07-25 01:17:07.505037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1472450 is same with the state(5) to be set 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:14.393 [2024-07-25 01:17:07.514884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472450 (9): Bad file descriptor 00:31:14.393 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.393 [2024-07-25 01:17:07.524931] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.393 [2024-07-25 01:17:07.525199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.393 [2024-07-25 01:17:07.525231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1472450 with addr=10.0.0.2, port=4420 00:31:14.393 [2024-07-25 01:17:07.525260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1472450 is same with the state(5) to be set 00:31:14.393 [2024-07-25 01:17:07.525301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472450 (9): Bad file descriptor 00:31:14.393 [2024-07-25 01:17:07.525324] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.393 [2024-07-25 01:17:07.525339] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.393 [2024-07-25 01:17:07.525354] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.393 [2024-07-25 01:17:07.525376] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.393 [2024-07-25 01:17:07.535015] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.393 [2024-07-25 01:17:07.535228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.393 [2024-07-25 01:17:07.535267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1472450 with addr=10.0.0.2, port=4420 00:31:14.393 [2024-07-25 01:17:07.535300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1472450 is same with the state(5) to be set 00:31:14.393 [2024-07-25 01:17:07.535323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472450 (9): Bad file descriptor 00:31:14.393 [2024-07-25 01:17:07.535344] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.393 [2024-07-25 01:17:07.535358] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.393 [2024-07-25 01:17:07.535372] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:14.393 [2024-07-25 01:17:07.535407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.652 [2024-07-25 01:17:07.545091] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.652 [2024-07-25 01:17:07.545270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.652 [2024-07-25 01:17:07.545316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1472450 with addr=10.0.0.2, port=4420 00:31:14.652 [2024-07-25 01:17:07.545333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1472450 is same with the state(5) to be set 00:31:14.652 [2024-07-25 01:17:07.545356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472450 (9): Bad file descriptor 00:31:14.652 [2024-07-25 01:17:07.545377] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.652 [2024-07-25 01:17:07.545392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.652 [2024-07-25 01:17:07.545405] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.652 [2024-07-25 01:17:07.545425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:14.652 [2024-07-25 01:17:07.555168] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.652 [2024-07-25 01:17:07.555412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.652 [2024-07-25 01:17:07.555441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1472450 with addr=10.0.0.2, port=4420 00:31:14.652 [2024-07-25 01:17:07.555459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1472450 is same with the state(5) to be set 00:31:14.652 [2024-07-25 01:17:07.555482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472450 (9): Bad file descriptor 00:31:14.652 [2024-07-25 01:17:07.555507] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.652 [2024-07-25 01:17:07.555522] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.652 [2024-07-25 01:17:07.555536] 
nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.652 [2024-07-25 01:17:07.555555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:14.652 [2024-07-25 01:17:07.565256] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.652 [2024-07-25 01:17:07.565463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.652 [2024-07-25 01:17:07.565492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1472450 with addr=10.0.0.2, port=4420 00:31:14.652 [2024-07-25 01:17:07.565508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1472450 is same with the state(5) to be set 00:31:14.652 [2024-07-25 01:17:07.565547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472450 (9): Bad file descriptor 00:31:14.652 [2024-07-25 01:17:07.565572] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.652 [2024-07-25 01:17:07.565588] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.652 [2024-07-25 01:17:07.565604] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.652 [2024-07-25 01:17:07.565624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.652 [2024-07-25 01:17:07.575366] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.652 [2024-07-25 01:17:07.575551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.652 [2024-07-25 01:17:07.575579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1472450 with addr=10.0.0.2, port=4420 00:31:14.652 [2024-07-25 01:17:07.575602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1472450 is same with the state(5) to be set 00:31:14.652 [2024-07-25 01:17:07.575625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472450 (9): Bad file descriptor 00:31:14.652 [2024-07-25 01:17:07.575646] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.652 [2024-07-25 01:17:07.575660] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.652 [2024-07-25 01:17:07.575674] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.652 [2024-07-25 01:17:07.575693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
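
    The get_* helpers these conditions call are one-liners over the host RPC socket, visible verbatim in
    the trace; sort/xargs flatten the JSON output onto one line so string equality is deterministic:

        get_subsystem_names() {
            rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
        }

        get_bdev_list() {
            rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
        }

        get_subsystem_paths() {
            # numeric sort so "4420 4421" compares stably against "$NVMF_PORT $NVMF_SECOND_PORT"
            rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
                | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
        }
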
00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.652 [2024-07-25 01:17:07.585436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.652 [2024-07-25 01:17:07.585628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.652 [2024-07-25 01:17:07.585660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1472450 with addr=10.0.0.2, port=4420 00:31:14.652 [2024-07-25 01:17:07.585678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1472450 is same with the state(5) to be set 00:31:14.652 [2024-07-25 01:17:07.585703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472450 (9): Bad file descriptor 00:31:14.652 [2024-07-25 01:17:07.585727] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.652 [2024-07-25 01:17:07.585743] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.652 [2024-07-25 01:17:07.585758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.652 [2024-07-25 01:17:07.585780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.652 [2024-07-25 01:17:07.589830] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:14.652 [2024-07-25 01:17:07.589866] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.652 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery 
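
    Stopping discovery detaches the controller and deletes its bdevs, so the test then waits for both
    host-side views to drain before checking that the two removal notifications arrived. The teardown
    sequence, as traced at @134-@138:

        rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
        waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
        waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
        is_notification_count_eq 2
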
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.653 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.913 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:14.913 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:14.913 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:14.913 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.913 01:17:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:14.913 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.913 01:17:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.844 [2024-07-25 01:17:08.875015] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:15.844 [2024-07-25 01:17:08.875050] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:15.844 [2024-07-25 01:17:08.875072] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:16.102 [2024-07-25 01:17:09.003462] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:16.102 [2024-07-25 01:17:09.068376] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:16.102 [2024-07-25 01:17:09.068420] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- 
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:16.102 request:
00:31:16.102 {
00:31:16.102 "name": "nvme",
00:31:16.102 "trtype": "tcp",
00:31:16.102 "traddr": "10.0.0.2",
00:31:16.102 "hostnqn": "nqn.2021-12.io.spdk:test",
00:31:16.102 "adrfam": "ipv4",
00:31:16.102 "trsvcid": "8009",
00:31:16.102 "wait_for_attach": true,
00:31:16.102 "method": "bdev_nvme_start_discovery",
00:31:16.102 "req_id": 1
00:31:16.102 }
00:31:16.102 Got JSON-RPC error response
00:31:16.102 response:
00:31:16.102 {
00:31:16.102 "code": -17,
00:31:16.102 "message": "File exists"
00:31:16.102 }
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
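The negative checks in this test all go through the NOT wrapper, whose es bookkeeping is visible in the trace above. Condensed to its core (a sketch; the real helper also vets the wrapped command via valid_exec_arg and treats exit codes above 128, i.e. signal deaths, specially):

  NOT() {
      local es=0
      "$@" || es=$?
      # succeed only if the wrapped command failed
      ((es != 0))
  }

  # A second discovery service under an already-used controller name must be
  # rejected by the host, which is the -17 "File exists" error shown above:
  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w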
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:16.102 request:
00:31:16.102 {
00:31:16.102 "name": "nvme_second",
00:31:16.102 "trtype": "tcp",
00:31:16.102 "traddr": "10.0.0.2",
00:31:16.102 "hostnqn": "nqn.2021-12.io.spdk:test",
00:31:16.102 "adrfam": "ipv4",
00:31:16.102 "trsvcid": "8009",
00:31:16.102 "wait_for_attach": true,
00:31:16.102 "method": "bdev_nvme_start_discovery",
00:31:16.102 "req_id": 1
00:31:16.102 }
00:31:16.102 Got JSON-RPC error response
00:31:16.102 response:
00:31:16.102 {
00:31:16.102 "code": -17,
00:31:16.102 "message": "File exists"
00:31:16.102 }
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:31:16.102 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:16.103 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:16.360 01:17:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:17.293 [2024-07-25 01:17:10.267858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:17.293 [2024-07-25 01:17:10.267911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146e500 with addr=10.0.0.2, port=8010
00:31:17.293 [2024-07-25 01:17:10.267940] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:31:17.293 [2024-07-25 01:17:10.267956] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:31:17.293 [2024-07-25 01:17:10.267969] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:31:18.226 [2024-07-25 01:17:11.270308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.226 [2024-07-25 01:17:11.270370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146e500 with addr=10.0.0.2, port=8010
00:31:18.226 [2024-07-25 01:17:11.270400] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:31:18.226 [2024-07-25 01:17:11.270415] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:31:18.226 [2024-07-25 01:17:11.270429] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:31:19.159 [2024-07-25 01:17:12.272446] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:31:19.159 request:
00:31:19.159 {
00:31:19.159 "name": "nvme_second",
00:31:19.159 "trtype": "tcp",
00:31:19.159 "traddr": "10.0.0.2",
00:31:19.159 "hostnqn": "nqn.2021-12.io.spdk:test",
00:31:19.159 "adrfam": "ipv4",
00:31:19.159 "trsvcid": "8010",
00:31:19.159 "attach_timeout_ms": 3000,
00:31:19.159 "method": "bdev_nvme_start_discovery",
00:31:19.159 "req_id": 1
00:31:19.159 }
00:31:19.159 Got JSON-RPC error response
00:31:19.159 response:
00:31:19.159 {
00:31:19.159 "code": -110,
00:31:19.159 "message": "Connection timed out"
00:31:19.159 }
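Nothing listens on port 8010, so each connect attempt above dies with errno 111 (ECONNREFUSED); once the 3000 ms budget passed via -T (surfacing as attach_timeout_ms in the request JSON) expires, the discovery poller gives up and the RPC fails with -110 (ETIMEDOUT), which is the expected outcome of this check. The call being exercised:

  # expected to fail after roughly 3 s, since no discovery service runs on 8010
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -T 3000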
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:31:19.159 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:19.160 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:31:19.160 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:19.160 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:31:19.160 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3891188
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:19.417 rmmod nvme_tcp
00:31:19.417 rmmod nvme_fabrics
00:31:19.417 rmmod nvme_keyring
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3891169 ']'
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3891169
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3891169 ']'
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3891169
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:31:19.417 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3891169
00:31:19.418 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:31:19.418 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:31:19.418 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3891169'
00:31:19.418 killing process with pid 3891169
00:31:19.418 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3891169
00:31:19.418 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3891169
00:31:19.675 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:31:19.675 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:31:19.675 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:31:19.675 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:19.675 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:19.675 01:17:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:19.675 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:19.675 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:21.575 01:17:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:21.575
00:31:21.575 real 0m14.095s
00:31:21.575 user 0m21.057s
00:31:21.575 sys 0m2.802s
00:31:21.575 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable
00:31:21.575 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:21.575 ************************************
00:31:21.575 END TEST nvmf_host_discovery
00:31:21.575 ************************************
00:31:21.575 01:17:14 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:31:21.575 01:17:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:31:21.575 01:17:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:31:21.575 01:17:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:21.575 ************************************
00:31:21.575 START TEST nvmf_host_multipath_status
00:31:21.575 ************************************
00:31:21.575 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:31:21.833 * Looking for test storage...
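The discovery test's teardown, traced just above, follows the standard pattern: drop the EXIT trap, stop the paired host process, unload the nvme-tcp/nvme-fabrics kernel modules via nvmfcleanup, kill the target by PID, and remove the network namespace. killprocess itself is a guarded kill-and-reap; a condensed sketch reconstructed from the trace (the in-tree helper also inspects the process name via ps so it can pick a suitable signal for sudo-wrapped processes):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid"                # error out early if the PID is already gone
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true           # reap it; ignore the nonzero exit status
  }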
00:31:21.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:21.833 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.833 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:21.833 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.833 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.833 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.833 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.833 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:21.834 01:17:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:21.834 01:17:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:23.768 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:23.769 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:23.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
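With the supported Intel (E810/X722) and Mellanox device-ID tables built above, the loop traced below maps each matching PCI function to its kernel net device by globbing sysfs; that is how the test discovers cvl_0_0 and cvl_0_1. The core of the mapping, lifted from the traced commands:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:0a:00.0/net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done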
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:23.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:23.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:23.769 01:17:16 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:23.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:23.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms
00:31:23.769
00:31:23.769 --- 10.0.0.2 ping statistics ---
00:31:23.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:23.769 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:23.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:23.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms
00:31:23.769
00:31:23.769 --- 10.0.0.1 ping statistics ---
00:31:23.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:23.769 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3894361
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3894361
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3894361 ']'
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:23.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable
00:31:23.769 01:17:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:23.769 [2024-07-25 01:17:16.860186] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
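All of the 10.0.0.x traffic in this test stays on one physical NIC: nvmf_tcp_init put port cvl_0_0 with the target address 10.0.0.2 into a private network namespace, left cvl_0_1 with 10.0.0.1 in the root namespace for the initiator, and verified both directions with ping, after which the target app is launched inside that namespace (hence the ip netns exec wrapper on every target-side command from here on). Condensed from the trace, with the nvmf_tgt path shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # check initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and target -> initiator
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # poll until /var/tmp/spdk.sock accepts RPCs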
00:31:23.769 [2024-07-25 01:17:16.860295] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:24.028 EAL: No free 2048 kB hugepages reported on node 1
00:31:24.028 [2024-07-25 01:17:16.924867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:24.028 [2024-07-25 01:17:17.008714] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:24.028 [2024-07-25 01:17:17.008767] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:24.028 [2024-07-25 01:17:17.008794] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:24.028 [2024-07-25 01:17:17.008805] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:24.028 [2024-07-25 01:17:17.008814] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:24.028 [2024-07-25 01:17:17.008896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:24.028 [2024-07-25 01:17:17.008901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:24.028 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:31:24.028 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0
00:31:24.028 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:31:24.028 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:24.028 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:24.028 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:24.028 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3894361
00:31:24.028 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:24.286 [2024-07-25 01:17:17.359025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:24.286 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:31:24.544 Malloc0
00:31:24.544 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:31:24.802 01:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:25.060 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:25.317 [2024-07-25 01:17:18.443723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:25.317 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:25.575 [2024-07-25 01:17:18.680334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
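Target-side provisioning for the multipath test, pulled out of the trace above: one 64 MiB malloc bdev exported through a single subsystem that listens on two TCP ports, which is what later gives the initiator two I/O paths to the same namespace. In nvmf_create_subsystem, -a allows any host NQN, -s sets the serial number, -r enables ANA reporting (a prerequisite for the ANA-state changes below), and -m caps the namespace count:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421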
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3894634
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3894634 /var/tmp/bdevperf.sock
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3894634 ']'
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:25.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable
00:31:25.575 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:26.141 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:31:26.141 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0
00:31:26.141 01:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:31:26.141 01:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:31:26.707 Nvme0n1
00:31:26.707 01:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:31:27.272 Nvme0n1
00:31:27.272 01:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:31:27.272 01:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:31:29.171 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:31:29.171 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:31:29.738 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
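The initiator side: bdevperf attaches the same -b Nvme0 / same subsystem NQN twice, once per listener; because the second bdev_nvme_attach_controller carries -x multipath, port 4421 becomes a second path to the existing Nvme0n1 rather than a new device (note that both calls print the same bdev name above). set_ANA_state is then just a pair of listener updates; a sketch matching the calls traced here:

  set_ANA_state() {
      # $1: ANA state for the 4420 listener, $2: for the 4421 listener
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  set_ANA_state optimized optimized    # the first of several combinations tried below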
00:31:29.738 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:31:31.113 01:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:31:31.113 01:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:31.113 01:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.113 01:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:31.113 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:31.113 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:31.113 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.113 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:31.371 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:31.371 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:31.371 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.371 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:31.628 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:31.628 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:31.628 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.628 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:31.885 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:31.885 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:31.885 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.885 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select
(.transport.trsvcid=="4420").accessible' 00:31:32.142 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.142 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:32.142 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.142 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:32.399 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.399 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:32.399 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:32.656 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:32.914 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:33.845 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:33.845 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:33.845 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.845 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:34.103 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.103 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:34.103 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.103 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:34.360 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.360 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:34.360 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.360 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:34.622 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:34.622 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:34.622 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.622 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:34.881 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.881 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:34.881 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.881 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:35.139 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.139 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:35.139 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.139 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:35.396 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.397 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:35.397 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:35.654 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:35.911 01:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:36.843 01:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:36.843 01:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:36.843 01:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.843 01:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:37.101 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.101 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:37.101 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.101 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:37.359 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.359 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:37.359 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.359 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:37.617 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.617 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:37.617 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.617 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:37.874 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.874 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:37.874 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.874 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.132 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.132 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:38.132 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.132 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:38.390 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.390 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:38.390 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:38.648 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:38.906 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:39.840 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:39.840 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:39.840 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:39.840 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.129 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.129 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:40.129 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.129 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:40.402 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.402 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:40.402 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.402 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:40.666 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.666 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:40.666 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.666 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:40.924 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.924 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:40.924 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.924 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.181 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
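The xtrace above is produced by three small helpers in multipath_status.sh: port_status (sh@64) queries bdevperf over its RPC socket and compares one io_path attribute for a given listener port, check_status (sh@68-73) applies port_status to both ports across the current/connected/accessible attributes, and set_ANA_state (sh@59-60) retargets both listeners of cnode1 before the 1-second settle sleep. A minimal sketch reconstructed from the trace, reusing the rpc.py and socket paths shown above (an approximation of the helpers, not the verbatim SPDK source):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock

# sh@64: compare one attribute (current/connected/accessible) of the io_path
# behind the given listener port against the expected value.
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$("$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# sh@68-73: six expectations, in the order the trace shows them.
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

# sh@59-60: set the ANA state of both listeners of cnode1; callers sleep 1s
# afterwards so the host can pick up the ANA change.
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

Under the default active_passive multipath policy only one path is current at a time, which is why set_ANA_state non_optimized optimized above is checked with check_status false true true true true true: 4421 becomes the current path while 4420 stays connected and accessible. Once sh@116 switches the policy to active_active (below), both paths report current=true.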
00:31:41.181 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:41.181 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.182 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:41.439 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.439 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:41.439 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:41.697 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:41.955 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:42.889 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:42.889 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:42.889 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.889 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.146 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.146 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:43.146 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.146 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:43.404 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.404 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:43.404 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.404 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:43.661 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.661 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
00:31:43.661 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.661 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:43.919 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.919 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:43.919 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.919 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.177 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.177 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:44.177 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.177 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:44.435 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.435 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:44.435 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:44.692 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:44.950 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:45.883 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:45.883 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:45.883 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.883 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:46.141 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:46.141 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:46.141 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.141 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:46.399 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.399 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:46.399 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.399 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:46.657 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.657 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:46.657 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.657 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:46.914 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.914 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:46.915 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.915 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:47.172 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:47.172 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:47.172 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.172 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:47.430 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.430 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:47.688 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:47.688 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:47.946 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:48.204 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:49.137 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:49.137 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:49.137 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.137 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:49.394 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.394 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:49.394 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.394 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:49.652 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.652 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:49.652 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.652 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:49.910 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.910 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:49.910 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.910 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:50.168 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.168 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:50.168 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.168 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:50.424 01:17:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.424 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:50.424 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.424 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:50.705 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.705 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:50.705 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:50.963 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:51.220 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:52.153 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:52.153 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:52.153 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.153 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:52.411 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.411 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:52.411 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.411 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.669 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.669 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.669 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.669 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:52.927 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.927 01:17:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:52.927 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.927 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:53.184 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.184 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:53.184 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.184 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:53.443 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.443 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:53.443 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.443 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.701 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.701 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:53.701 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:53.958 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:54.217 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:55.149 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:55.149 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:55.149 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.149 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:55.407 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.407 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:55.407 01:17:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.407 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:55.666 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.666 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:55.666 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.666 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:55.957 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.957 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:55.957 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.957 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.215 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.215 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.215 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.215 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:56.473 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.473 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:56.473 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.473 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:56.730 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.730 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:56.730 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:56.988 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:57.246 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:58.179 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:58.179 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:58.179 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.179 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:58.436 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.436 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:58.436 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.436 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:58.693 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.693 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:58.693 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.693 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:58.951 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.951 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:58.951 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.951 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:59.208 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.208 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:59.208 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.208 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:59.465 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.465 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:59.465 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.465 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3894634 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3894634 ']' 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3894634 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3894634 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3894634' 00:31:59.723 killing process with pid 3894634 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3894634 00:31:59.723 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3894634 00:31:59.723 Connection closed with partial response: 00:31:59.723 00:31:59.723 00:31:59.985 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3894634 00:31:59.985 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:59.985 [2024-07-25 01:17:18.742284] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:31:59.985 [2024-07-25 01:17:18.742379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894634 ] 00:31:59.985 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.985 [2024-07-25 01:17:18.804869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.985 [2024-07-25 01:17:18.889391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.985 Running I/O for 90 seconds... 
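The teardown just above goes through killprocess() from test/common/autotest_common.sh; the @946-@970 tags in the trace map to its individual checks. A sketch of the flow they imply (approximate reconstruction; the sudo branch is only tested here and its body is not visible in this trace):

killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1          # @946: refuse an empty pid
    kill -0 "$pid"                       # @950: bail out if the process is already gone
    local process_name
    if [[ "$(uname)" == Linux ]]; then   # @951
        process_name=$(ps --no-headers -o comm= "$pid")  # @952: here reactor_2, a bdevperf reactor thread
    fi
    if [[ "$process_name" == sudo ]]; then
        : # @956: sudo-wrapped processes need special handling (body not visible in this trace)
    fi
    echo "killing process with pid $pid" # @964
    kill "$pid"                          # @965
    wait "$pid"                          # @970: reap it
}

The "Connection closed with partial response" lines are bdevperf noting I/O still in flight when it was signalled; multipath_status.sh then waits for the pid once more (sh@139) and dumps bdevperf's captured output from try.txt (sh@141), which is everything that follows.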
00:31:59.985 [2024-07-25 01:17:34.668382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.985 [2024-07-25 01:17:34.668438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.668968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.668984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.669005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.669020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.669041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.985 [2024-07-25 01:17:34.669056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.669076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.985 [2024-07-25 01:17:34.669092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.669113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.985 [2024-07-25 01:17:34.669129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:59.985 [2024-07-25 01:17:34.669149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.986 [2024-07-25 01:17:34.669165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.986 [2024-07-25 01:17:34.669201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.986 [2024-07-25 01:17:34.669237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.986 [2024-07-25 01:17:34.669302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.986 [2024-07-25 01:17:34.669340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:59.986 [2024-07-25 01:17:34.669626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.669960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.669976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.670962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.670978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.671003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.671019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.671042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.986 [2024-07-25 01:17:34.671058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:59.986 [2024-07-25 01:17:34.671082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
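Each pair of entries in this try.txt dump is one failed I/O: nvme_io_qpair_print_command() prints the READ/WRITE that was in flight and spdk_nvme_print_completion() prints its completion, here ASYMMETRIC ACCESS INACCESSIBLE (status code type 0x3 / status code 0x02, the "(03/02)" in each line), which is exactly what the host should see on a path whose ANA state was just set to inaccessible; the bdev_nvme layer fails these over to the remaining accessible path, so the 90-second bdevperf run keeps going. To condense a dump like this for review, something along these lines works (illustrative shell, not part of the test):

grep -o 'ASYMMETRIC ACCESS [A-Z]*' try.txt | sort | uniq -c    # completions per ANA status
grep -c 'nvme_io_qpair_print_command:' try.txt                 # total commands printed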
00:31:59.987 [2024-07-25 01:17:34.671223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.671969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.671993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:59.987 [2024-07-25 01:17:34.672493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:59.987 [2024-07-25 01:17:34.672732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.987 [2024-07-25 01:17:34.672748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.672772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.672788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.672812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.672831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.672856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.672872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.672896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.672912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.672936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.672952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.672976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.672992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.988 [2024-07-25 01:17:34.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.988 [2024-07-25 01:17:34.673351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.988 [2024-07-25 01:17:34.673398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.988 [2024-07-25 01:17:34.673445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.988 [2024-07-25 01:17:34.673492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.988 [2024-07-25 01:17:34.673543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.988 [2024-07-25 01:17:34.673612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.988 [2024-07-25 01:17:34.673673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.673968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.673984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:31:59.988 [2024-07-25 01:17:34.674012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:34.674451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:34.674467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:50.173746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:50.173808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:50.173876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:50.173898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:50.173923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:50.173940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:50.173964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:50.173980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:50.174002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:50.174019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:59.988 [2024-07-25 01:17:50.174053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.988 [2024-07-25 01:17:50.174071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.989 [2024-07-25 01:17:50.174162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.174643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.989 [2024-07-25 01:17:50.174696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.989 [2024-07-25 01:17:50.174740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.989 [2024-07-25 01:17:50.174779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.989 [2024-07-25 01:17:50.174817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.989 [2024-07-25 01:17:50.174855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.989 [2024-07-25 01:17:50.174892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.174914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.989 [2024-07-25 01:17:50.174931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:59.989 [2024-07-25 01:17:50.175943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.989 [2024-07-25 01:17:50.175958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.175980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.175996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:31:59.990 [2024-07-25 01:17:50.176445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.176655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.990 [2024-07-25 01:17:50.176707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.990 [2024-07-25 01:17:50.176745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.990 [2024-07-25 01:17:50.176781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.990 [2024-07-25 01:17:50.176834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.990 [2024-07-25 01:17:50.176871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.176894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.990 [2024-07-25 01:17:50.176910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.178865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.990 [2024-07-25 01:17:50.178890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.178917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.178936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:59.990 [2024-07-25 01:17:50.178958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.990 [2024-07-25 01:17:50.178975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:59.990 Received shutdown signal, test time was about 32.278028 seconds 00:31:59.990 00:31:59.990 Latency(us) 00:31:59.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.990 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:59.990 Verification LBA range: start 0x0 length 0x4000 00:31:59.990 Nvme0n1 : 32.28 8066.43 31.51 0.00 0.00 15843.00 634.12 4026531.84 00:31:59.990 =================================================================================================================== 00:31:59.990 Total : 8066.43 31.51 0.00 0.00 15843.00 634.12 4026531.84 00:31:59.990 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- 
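Stripped of the xtrace noise, the teardown traced at multipath_status.sh@143-@148 above is four commands; the following is a sketch reassembled from the traced lines (rootdir is an editor-introduced shorthand for readability, not a variable from the script):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $rootdir/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the subsystem under test
  trap - SIGINT SIGTERM EXIT                                                # clear the cleanup trap
  rm -f $rootdir/test/nvmf/host/try.txt                                     # drop the scratch output file
  nvmftestfini                                                              # sync, unload nvme-tcp/nvme-fabrics, kill the target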
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:00.248 rmmod nvme_tcp
00:32:00.248 rmmod nvme_fabrics
00:32:00.248 rmmod nvme_keyring
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3894361 ']'
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3894361
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3894361 ']'
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3894361
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3894361
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3894361'
00:32:00.248 killing process with pid 3894361
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3894361
00:32:00.248 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3894361
00:32:00.505 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:00.505 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:00.505 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:00.505 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:00.505 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:00.505 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:00.505 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:00.505 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:03.034 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:03.034
00:32:03.034 real 0m40.948s
00:32:03.034 user 2m3.861s
00:32:03.034 sys 0m10.275s
00:32:03.034 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:32:03.034 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:03.034 ************************************
00:32:03.034 END TEST nvmf_host_multipath_status
00:32:03.034 ************************************
00:32:03.034 01:17:55 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
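The killprocess 3894361 call is traced line by line at common/autotest_common.sh@946-@970 above; as a rough sketch, assuming the guard branches behave as the traced literals suggest (the real function body may differ), it amounts to:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                            # @946: a pid argument is required
      kill -0 "$pid" || return 0                           # @950: already gone, nothing to do (assumed handling)
      if [ "$(uname)" = Linux ]; then                      # @951
          process_name=$(ps --no-headers -o comm= "$pid")  # @952: reactor_0 for an SPDK target
      fi
      [ "$process_name" = sudo ] && return 1               # @956: never signal a bare sudo wrapper (assumed handling)
      echo "killing process with pid $pid"                 # @964
      kill "$pid"                                          # @965: SIGTERM by default
      wait "$pid"                                          # @970: reap it so ports and shared memory are really released
  }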
00:32:03.034 01:17:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:32:03.034 01:17:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:32:03.034 01:17:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:03.034 ************************************
00:32:03.034 START TEST nvmf_discovery_remove_ifc
00:32:03.034 ************************************
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:03.034 * Looking for test storage...
00:32:03.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
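The @17-@19 assignments above build the host identity that later nvme connect calls pass along; a sketch of how the pieces fit, where the uuid derivation and the final connect line are assumptions (this trace has not reached a connect yet, and $NVMF_FIRST_TARGET_IP is only set later by the framework):

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare uuid; derivation assumed from the two values in the trace
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"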
[... paths/export.sh@2-@4 prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to an already heavily duplicated PATH, @5 exports it and @6 echoes the result; the five near-identical multi-hundred-character PATH strings are omitted here ...]
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
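Of the build_nvmf_app_args trace above, only the two array appends take effect on this run; restated as a sketch, with the -e gloss being an editor's reading of SPDK's usual flag semantics rather than anything the log itself states:

  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shared-memory/instance id; -e 0xFFFF is likely the tracepoint group mask (all groups)
  NVMF_APP+=("${NO_HUGE[@]}")                  # empty on this run; would carry no-hugepages options otherwise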
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:03.034 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:03.035 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.035 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:03.035 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.035 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:03.035 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:03.035 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:03.035 01:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 
00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:04.935 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:04.935 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:04.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:04.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.935 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:04.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:32:04.936 00:32:04.936 --- 10.0.0.2 ping statistics --- 00:32:04.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.936 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:04.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:32:04.936 00:32:04.936 --- 10.0.0.1 ping statistics --- 00:32:04.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.936 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3900818 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3900818 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3900818 ']' 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:04.936 01:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.936 [2024-07-25 01:17:57.840562] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:32:04.936 [2024-07-25 01:17:57.840658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.936 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.936 [2024-07-25 01:17:57.909271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.936 [2024-07-25 01:17:57.997802] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.936 [2024-07-25 01:17:57.997864] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.936 [2024-07-25 01:17:57.997890] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.936 [2024-07-25 01:17:57.997903] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.936 [2024-07-25 01:17:57.997915] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:04.936 [2024-07-25 01:17:57.997963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.194 [2024-07-25 01:17:58.155127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.194 [2024-07-25 01:17:58.163388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:05.194 null0 00:32:05.194 [2024-07-25 01:17:58.195266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3900844 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3900844 /tmp/host.sock 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3900844 ']' 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:05.194 
01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:05.194 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:05.194 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.194 [2024-07-25 01:17:58.260159] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:32:05.194 [2024-07-25 01:17:58.260252] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900844 ] 00:32:05.194 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.194 [2024-07-25 01:17:58.322066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.452 [2024-07-25 01:17:58.414367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.452 01:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.825 [2024-07-25 01:17:59.654104] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:06.825 [2024-07-25 01:17:59.654141] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:06.825 [2024-07-25 01:17:59.654176] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:06.825 [2024-07-25 01:17:59.781616] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:07.083 [2024-07-25 01:18:00.005950] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:07.083 [2024-07-25 01:18:00.006054] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:07.083 [2024-07-25 01:18:00.006123] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:07.083 [2024-07-25 01:18:00.006158] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:07.083 [2024-07-25 01:18:00.006203] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.083 [2024-07-25 01:18:00.012342] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f66df0 was disconnected and freed. delete nvme_qpair. 
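The trace above repeats one polling pattern throughout: query the host-side SPDK app over its RPC socket, flatten the bdev names, and sleep until the list matches the expected bdev. A minimal standalone sketch of that loop, reconstructed from the visible commands (rpc_cmd -s /tmp/host.sock bdev_get_bdevs piped through jq, sort, xargs, with a 1-second retry); the use of SPDK's stock scripts/rpc.py as the RPC client and its path are assumptions, since the trace's rpc_cmd wrapper is not shown:

    #!/usr/bin/env bash
    # Sketch: poll an SPDK app's bdev list over its RPC socket until the
    # expected bdev appears, mirroring the get_bdev_list / wait_for_bdev
    # pattern in the trace above.
    set -euo pipefail

    HOST_SOCK=/tmp/host.sock   # socket the host app was started with (-r /tmp/host.sock)
    RPC=./scripts/rpc.py       # stock SPDK RPC client; exact path is an assumption

    get_bdev_list() {
        # bdev_get_bdevs returns a JSON array; reduce it to a sorted,
        # space-separated list of names, as the trace does with jq|sort|xargs.
        "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # Empty $expected waits for the list to drain to nothing.
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # blocks until discovery has attached the subsystem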
00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.083 01:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.015 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.015 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.015 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.015 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.015 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.015 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.015 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.015 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.272 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:08.273 01:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:09.204 01:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:10.136 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:11.526 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.526 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.526 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.526 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.526 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.527 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.527 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.527 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.527 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:11.527 01:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:12.458 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.458 [2024-07-25 01:18:05.446962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:12.458 [2024-07-25 01:18:05.447032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.458 [2024-07-25 01:18:05.447054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.458 [2024-07-25 01:18:05.447074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.458 [2024-07-25 01:18:05.447090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.458 [2024-07-25 01:18:05.447107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.458 [2024-07-25 01:18:05.447121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.458 [2024-07-25 01:18:05.447137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.458 [2024-07-25 01:18:05.447152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.458 [2024-07-25 01:18:05.447168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.458 [2024-07-25 01:18:05.447183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.458 [2024-07-25 01:18:05.447198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2df80 is same with the state(5) to be set 00:32:12.458 [2024-07-25 01:18:05.456980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2df80 (9): Bad file descriptor 00:32:12.458 [2024-07-25 01:18:05.467029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.391 [2024-07-25 01:18:06.494393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:13.391 [2024-07-25 
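The "Connection timed out" dump above is the point of the test: a few seconds earlier the target-side address was deleted and the interface downed inside the namespace, so the host's reconnect attempts fail until the path is restored further below and discovery attaches a fresh controller (nvme1). Condensed, the fault-injection sequence is the following sketch, using the namespace and interface names from the trace and the wait_for_bdev helper sketched earlier:

    NS=cvl_0_0_ns_spdk   # target network namespace (from the trace)
    IF=cvl_0_0           # target-side interface
    TGT_IP=10.0.0.2/24

    # Yank the target address and link out from under the connected host.
    ip netns exec "$NS" ip addr del "$TGT_IP" dev "$IF"
    ip netns exec "$NS" ip link set "$IF" down
    wait_for_bdev ''          # bdev list drains once the ctrlr times out

    # Restore the path; discovery re-attaches a fresh controller.
    ip netns exec "$NS" ip addr add "$TGT_IP" dev "$IF"
    ip netns exec "$NS" ip link set "$IF" up
    wait_for_bdev nvme1n1     # re-attached subsystem comes back as nvme1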
01:18:06.494465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2df80 with addr=10.0.0.2, port=4420 00:32:13.391 [2024-07-25 01:18:06.494493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2df80 is same with the state(5) to be set 00:32:13.391 [2024-07-25 01:18:06.494543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2df80 (9): Bad file descriptor 00:32:13.391 [2024-07-25 01:18:06.495006] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:13.391 [2024-07-25 01:18:06.495041] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:13.391 [2024-07-25 01:18:06.495066] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:13.391 [2024-07-25 01:18:06.495084] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:13.391 [2024-07-25 01:18:06.495120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.391 [2024-07-25 01:18:06.495140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:13.391 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:14.365 [2024-07-25 01:18:07.497640] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:14.365 [2024-07-25 01:18:07.497680] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:14.365 [2024-07-25 01:18:07.497697] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:14.365 [2024-07-25 01:18:07.497722] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:14.365 [2024-07-25 01:18:07.497760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:14.365 [2024-07-25 01:18:07.497809] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:14.365 [2024-07-25 01:18:07.497858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.365 [2024-07-25 01:18:07.497879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.365 [2024-07-25 01:18:07.497896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.365 [2024-07-25 01:18:07.497909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.365 [2024-07-25 01:18:07.497922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.365 [2024-07-25 01:18:07.497949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.365 [2024-07-25 01:18:07.497963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.365 [2024-07-25 01:18:07.497975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.365 [2024-07-25 01:18:07.497988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.365 [2024-07-25 01:18:07.498000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.365 [2024-07-25 01:18:07.498012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:14.365 [2024-07-25 01:18:07.498182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2d410 (9): Bad file descriptor 00:32:14.365 [2024-07-25 01:18:07.499203] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:14.365 [2024-07-25 01:18:07.499239] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:14.365 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:14.365 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.365 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:14.365 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.365 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.365 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:14.365 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:14.623 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:15.555 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:16.488 [2024-07-25 01:18:09.550458] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:16.488 [2024-07-25 01:18:09.550491] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:16.488 [2024-07-25 01:18:09.550516] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:16.488 [2024-07-25 01:18:09.636794] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:16.746 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:16.746 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.746 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:16.746 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.746 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.746 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:16.746 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:16.746 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.747 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:16.747 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:16.747 [2024-07-25 01:18:09.740716] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:16.747 [2024-07-25 01:18:09.740762] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:16.747 [2024-07-25 01:18:09.740795] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:16.747 [2024-07-25 01:18:09.740819] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:16.747 [2024-07-25 01:18:09.740833] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:16.747 [2024-07-25 01:18:09.747667] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f3ad30 was disconnected and freed. delete nvme_qpair. 
00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3900844 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3900844 ']' 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3900844 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3900844 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3900844' 00:32:17.680 killing process with pid 3900844 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3900844 00:32:17.680 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3900844 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:17.938 rmmod nvme_tcp 00:32:17.938 rmmod nvme_fabrics 00:32:17.938 rmmod nvme_keyring 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
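Teardown runs in two halves: the host app (pid 3900844) is killed first and nvmfcleanup unloads the nvme-tcp module stack (the rmmod lines above), then the target app (pid 3900818) and its namespace are removed below. A compact sketch of the equivalent manual cleanup, assuming both pids were captured at launch; the explicit netns deletion is an assumption about what the redirected _remove_spdk_ns helper does:

    # wait only succeeds if this shell launched the apps, as the test shell did.
    kill "$hostpid"; wait "$hostpid" 2>/dev/null || true   # host bdev_nvme app
    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null || true   # target app in the netns

    modprobe -v -r nvme-tcp       # also drops nvme_fabrics/nvme_keyring deps
    modprobe -v -r nvme-fabrics   # harmless if the line above already removed it

    ip -4 addr flush cvl_0_1                              # initiator-side address, as in the trace
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed _remove_spdk_ns behavior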
00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3900818 ']' 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3900818 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3900818 ']' 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3900818 00:32:17.938 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:18.196 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:18.196 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3900818 00:32:18.196 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:18.196 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:18.196 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3900818' 00:32:18.196 killing process with pid 3900818 00:32:18.196 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3900818 00:32:18.196 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3900818 00:32:18.455 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.455 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:18.455 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:18.455 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.455 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:18.455 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.455 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.455 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.355 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:20.355 00:32:20.355 real 0m17.665s 00:32:20.355 user 0m25.789s 00:32:20.355 sys 0m2.932s 00:32:20.355 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:20.355 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.355 ************************************ 00:32:20.355 END TEST nvmf_discovery_remove_ifc 00:32:20.355 ************************************ 00:32:20.355 01:18:13 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:20.355 01:18:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:20.355 01:18:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:20.355 01:18:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:20.355 ************************************ 00:32:20.355 START TEST nvmf_identify_kernel_target 00:32:20.355 ************************************ 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:20.355 * Looking for test storage... 00:32:20.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.355 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
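(The nvmf_tcp_init steps traced a few lines below reduce to the following ip/iptables sequence; a condensed sketch using the interface and namespace names from this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk):

    ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
)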
00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:20.614 01:18:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.514 01:18:15 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:22.514 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:22.514 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.514 
01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:22.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:22.514 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.514 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:22.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:32:22.515 00:32:22.515 --- 10.0.0.2 ping statistics --- 00:32:22.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.515 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:22.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:32:22.515 00:32:22.515 --- 10.0.0.1 ping statistics --- 00:32:22.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.515 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.515 
01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:22.515 01:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:23.449 Waiting for block devices as requested 00:32:23.707 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:23.707 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:23.707 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:23.965 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:23.965 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:23.965 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:24.222 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:24.222 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:24.222 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:24.222 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:24.480 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:24.480 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:24.480 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:24.480 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:24.739 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:24.739 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:24.739 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:24.997 01:18:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:24.997 01:18:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:24.997 01:18:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:24.997 01:18:17 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:24.997 01:18:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:24.997 01:18:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:24.997 01:18:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:24.997 01:18:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:24.997 01:18:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:24.997 No valid GPT data, bailing 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:24.997 00:32:24.997 Discovery Log Number of Records 2, Generation counter 2 00:32:24.997 =====Discovery Log Entry 0====== 00:32:24.997 trtype: tcp 00:32:24.997 adrfam: ipv4 00:32:24.997 subtype: current discovery subsystem 00:32:24.997 treq: not specified, sq flow control disable supported 00:32:24.997 portid: 1 00:32:24.997 trsvcid: 4420 00:32:24.997 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:24.997 traddr: 10.0.0.1 00:32:24.997 eflags: none 00:32:24.997 sectype: none 00:32:24.997 =====Discovery Log Entry 1====== 
00:32:24.997 trtype: tcp 00:32:24.997 adrfam: ipv4 00:32:24.997 subtype: nvme subsystem 00:32:24.997 treq: not specified, sq flow control disable supported 00:32:24.997 portid: 1 00:32:24.997 trsvcid: 4420 00:32:24.997 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:24.997 traddr: 10.0.0.1 00:32:24.997 eflags: none 00:32:24.997 sectype: none 00:32:24.997 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:24.997 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:24.997 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.256 ===================================================== 00:32:25.256 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:25.256 ===================================================== 00:32:25.256 Controller Capabilities/Features 00:32:25.256 ================================ 00:32:25.256 Vendor ID: 0000 00:32:25.256 Subsystem Vendor ID: 0000 00:32:25.256 Serial Number: 7821dba8d09293c2c221 00:32:25.256 Model Number: Linux 00:32:25.256 Firmware Version: 6.7.0-68 00:32:25.256 Recommended Arb Burst: 0 00:32:25.256 IEEE OUI Identifier: 00 00 00 00:32:25.256 Multi-path I/O 00:32:25.256 May have multiple subsystem ports: No 00:32:25.256 May have multiple controllers: No 00:32:25.256 Associated with SR-IOV VF: No 00:32:25.256 Max Data Transfer Size: Unlimited 00:32:25.256 Max Number of Namespaces: 0 00:32:25.256 Max Number of I/O Queues: 1024 00:32:25.256 NVMe Specification Version (VS): 1.3 00:32:25.256 NVMe Specification Version (Identify): 1.3 00:32:25.256 Maximum Queue Entries: 1024 00:32:25.256 Contiguous Queues Required: No 00:32:25.256 Arbitration Mechanisms Supported 00:32:25.256 Weighted Round Robin: Not Supported 00:32:25.256 Vendor Specific: Not Supported 00:32:25.256 Reset Timeout: 7500 ms 00:32:25.256 Doorbell Stride: 4 bytes 00:32:25.256 NVM Subsystem Reset: Not Supported 00:32:25.256 Command Sets Supported 00:32:25.256 NVM Command Set: Supported 00:32:25.256 Boot Partition: Not Supported 00:32:25.256 Memory Page Size Minimum: 4096 bytes 00:32:25.256 Memory Page Size Maximum: 4096 bytes 00:32:25.256 Persistent Memory Region: Not Supported 00:32:25.256 Optional Asynchronous Events Supported 00:32:25.256 Namespace Attribute Notices: Not Supported 00:32:25.256 Firmware Activation Notices: Not Supported 00:32:25.256 ANA Change Notices: Not Supported 00:32:25.256 PLE Aggregate Log Change Notices: Not Supported 00:32:25.256 LBA Status Info Alert Notices: Not Supported 00:32:25.256 EGE Aggregate Log Change Notices: Not Supported 00:32:25.256 Normal NVM Subsystem Shutdown event: Not Supported 00:32:25.256 Zone Descriptor Change Notices: Not Supported 00:32:25.256 Discovery Log Change Notices: Supported 00:32:25.256 Controller Attributes 00:32:25.256 128-bit Host Identifier: Not Supported 00:32:25.256 Non-Operational Permissive Mode: Not Supported 00:32:25.256 NVM Sets: Not Supported 00:32:25.256 Read Recovery Levels: Not Supported 00:32:25.256 Endurance Groups: Not Supported 00:32:25.256 Predictable Latency Mode: Not Supported 00:32:25.256 Traffic Based Keep ALive: Not Supported 00:32:25.256 Namespace Granularity: Not Supported 00:32:25.256 SQ Associations: Not Supported 00:32:25.256 UUID List: Not Supported 00:32:25.256 Multi-Domain Subsystem: Not Supported 00:32:25.256 Fixed Capacity Management: Not Supported 00:32:25.256 Variable Capacity Management: Not 
Supported 00:32:25.256 Delete Endurance Group: Not Supported 00:32:25.256 Delete NVM Set: Not Supported 00:32:25.256 Extended LBA Formats Supported: Not Supported 00:32:25.256 Flexible Data Placement Supported: Not Supported 00:32:25.256 00:32:25.256 Controller Memory Buffer Support 00:32:25.256 ================================ 00:32:25.256 Supported: No 00:32:25.256 00:32:25.256 Persistent Memory Region Support 00:32:25.256 ================================ 00:32:25.256 Supported: No 00:32:25.256 00:32:25.256 Admin Command Set Attributes 00:32:25.256 ============================ 00:32:25.256 Security Send/Receive: Not Supported 00:32:25.256 Format NVM: Not Supported 00:32:25.256 Firmware Activate/Download: Not Supported 00:32:25.256 Namespace Management: Not Supported 00:32:25.256 Device Self-Test: Not Supported 00:32:25.256 Directives: Not Supported 00:32:25.256 NVMe-MI: Not Supported 00:32:25.257 Virtualization Management: Not Supported 00:32:25.257 Doorbell Buffer Config: Not Supported 00:32:25.257 Get LBA Status Capability: Not Supported 00:32:25.257 Command & Feature Lockdown Capability: Not Supported 00:32:25.257 Abort Command Limit: 1 00:32:25.257 Async Event Request Limit: 1 00:32:25.257 Number of Firmware Slots: N/A 00:32:25.257 Firmware Slot 1 Read-Only: N/A 00:32:25.257 Firmware Activation Without Reset: N/A 00:32:25.257 Multiple Update Detection Support: N/A 00:32:25.257 Firmware Update Granularity: No Information Provided 00:32:25.257 Per-Namespace SMART Log: No 00:32:25.257 Asymmetric Namespace Access Log Page: Not Supported 00:32:25.257 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:25.257 Command Effects Log Page: Not Supported 00:32:25.257 Get Log Page Extended Data: Supported 00:32:25.257 Telemetry Log Pages: Not Supported 00:32:25.257 Persistent Event Log Pages: Not Supported 00:32:25.257 Supported Log Pages Log Page: May Support 00:32:25.257 Commands Supported & Effects Log Page: Not Supported 00:32:25.257 Feature Identifiers & Effects Log Page:May Support 00:32:25.257 NVMe-MI Commands & Effects Log Page: May Support 00:32:25.257 Data Area 4 for Telemetry Log: Not Supported 00:32:25.257 Error Log Page Entries Supported: 1 00:32:25.257 Keep Alive: Not Supported 00:32:25.257 00:32:25.257 NVM Command Set Attributes 00:32:25.257 ========================== 00:32:25.257 Submission Queue Entry Size 00:32:25.257 Max: 1 00:32:25.257 Min: 1 00:32:25.257 Completion Queue Entry Size 00:32:25.257 Max: 1 00:32:25.257 Min: 1 00:32:25.257 Number of Namespaces: 0 00:32:25.257 Compare Command: Not Supported 00:32:25.257 Write Uncorrectable Command: Not Supported 00:32:25.257 Dataset Management Command: Not Supported 00:32:25.257 Write Zeroes Command: Not Supported 00:32:25.257 Set Features Save Field: Not Supported 00:32:25.257 Reservations: Not Supported 00:32:25.257 Timestamp: Not Supported 00:32:25.257 Copy: Not Supported 00:32:25.257 Volatile Write Cache: Not Present 00:32:25.257 Atomic Write Unit (Normal): 1 00:32:25.257 Atomic Write Unit (PFail): 1 00:32:25.257 Atomic Compare & Write Unit: 1 00:32:25.257 Fused Compare & Write: Not Supported 00:32:25.257 Scatter-Gather List 00:32:25.257 SGL Command Set: Supported 00:32:25.257 SGL Keyed: Not Supported 00:32:25.257 SGL Bit Bucket Descriptor: Not Supported 00:32:25.257 SGL Metadata Pointer: Not Supported 00:32:25.257 Oversized SGL: Not Supported 00:32:25.257 SGL Metadata Address: Not Supported 00:32:25.257 SGL Offset: Supported 00:32:25.257 Transport SGL Data Block: Not Supported 00:32:25.257 Replay Protected Memory Block: 
Not Supported 00:32:25.257 00:32:25.257 Firmware Slot Information 00:32:25.257 ========================= 00:32:25.257 Active slot: 0 00:32:25.257 00:32:25.257 00:32:25.257 Error Log 00:32:25.257 ========= 00:32:25.257 00:32:25.257 Active Namespaces 00:32:25.257 ================= 00:32:25.257 Discovery Log Page 00:32:25.257 ================== 00:32:25.257 Generation Counter: 2 00:32:25.257 Number of Records: 2 00:32:25.257 Record Format: 0 00:32:25.257 00:32:25.257 Discovery Log Entry 0 00:32:25.257 ---------------------- 00:32:25.257 Transport Type: 3 (TCP) 00:32:25.257 Address Family: 1 (IPv4) 00:32:25.257 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:25.257 Entry Flags: 00:32:25.257 Duplicate Returned Information: 0 00:32:25.257 Explicit Persistent Connection Support for Discovery: 0 00:32:25.257 Transport Requirements: 00:32:25.257 Secure Channel: Not Specified 00:32:25.257 Port ID: 1 (0x0001) 00:32:25.257 Controller ID: 65535 (0xffff) 00:32:25.257 Admin Max SQ Size: 32 00:32:25.257 Transport Service Identifier: 4420 00:32:25.257 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:25.257 Transport Address: 10.0.0.1 00:32:25.257 Discovery Log Entry 1 00:32:25.257 ---------------------- 00:32:25.257 Transport Type: 3 (TCP) 00:32:25.257 Address Family: 1 (IPv4) 00:32:25.257 Subsystem Type: 2 (NVM Subsystem) 00:32:25.257 Entry Flags: 00:32:25.257 Duplicate Returned Information: 0 00:32:25.257 Explicit Persistent Connection Support for Discovery: 0 00:32:25.257 Transport Requirements: 00:32:25.257 Secure Channel: Not Specified 00:32:25.257 Port ID: 1 (0x0001) 00:32:25.257 Controller ID: 65535 (0xffff) 00:32:25.257 Admin Max SQ Size: 32 00:32:25.257 Transport Service Identifier: 4420 00:32:25.257 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:25.257 Transport Address: 10.0.0.1 00:32:25.257 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:25.257 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.257 get_feature(0x01) failed 00:32:25.257 get_feature(0x02) failed 00:32:25.257 get_feature(0x04) failed 00:32:25.257 ===================================================== 00:32:25.257 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:25.257 ===================================================== 00:32:25.257 Controller Capabilities/Features 00:32:25.257 ================================ 00:32:25.257 Vendor ID: 0000 00:32:25.257 Subsystem Vendor ID: 0000 00:32:25.257 Serial Number: 3266a8d7acfaed9279ee 00:32:25.257 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:25.257 Firmware Version: 6.7.0-68 00:32:25.257 Recommended Arb Burst: 6 00:32:25.257 IEEE OUI Identifier: 00 00 00 00:32:25.257 Multi-path I/O 00:32:25.257 May have multiple subsystem ports: Yes 00:32:25.257 May have multiple controllers: Yes 00:32:25.257 Associated with SR-IOV VF: No 00:32:25.257 Max Data Transfer Size: Unlimited 00:32:25.257 Max Number of Namespaces: 1024 00:32:25.257 Max Number of I/O Queues: 128 00:32:25.257 NVMe Specification Version (VS): 1.3 00:32:25.257 NVMe Specification Version (Identify): 1.3 00:32:25.257 Maximum Queue Entries: 1024 00:32:25.257 Contiguous Queues Required: No 00:32:25.257 Arbitration Mechanisms Supported 00:32:25.257 Weighted Round Robin: Not Supported 00:32:25.257 Vendor Specific: Not Supported 
00:32:25.257 Reset Timeout: 7500 ms 00:32:25.257 Doorbell Stride: 4 bytes 00:32:25.257 NVM Subsystem Reset: Not Supported 00:32:25.257 Command Sets Supported 00:32:25.257 NVM Command Set: Supported 00:32:25.257 Boot Partition: Not Supported 00:32:25.257 Memory Page Size Minimum: 4096 bytes 00:32:25.257 Memory Page Size Maximum: 4096 bytes 00:32:25.257 Persistent Memory Region: Not Supported 00:32:25.257 Optional Asynchronous Events Supported 00:32:25.257 Namespace Attribute Notices: Supported 00:32:25.257 Firmware Activation Notices: Not Supported 00:32:25.257 ANA Change Notices: Supported 00:32:25.257 PLE Aggregate Log Change Notices: Not Supported 00:32:25.257 LBA Status Info Alert Notices: Not Supported 00:32:25.257 EGE Aggregate Log Change Notices: Not Supported 00:32:25.257 Normal NVM Subsystem Shutdown event: Not Supported 00:32:25.257 Zone Descriptor Change Notices: Not Supported 00:32:25.257 Discovery Log Change Notices: Not Supported 00:32:25.257 Controller Attributes 00:32:25.257 128-bit Host Identifier: Supported 00:32:25.257 Non-Operational Permissive Mode: Not Supported 00:32:25.257 NVM Sets: Not Supported 00:32:25.257 Read Recovery Levels: Not Supported 00:32:25.257 Endurance Groups: Not Supported 00:32:25.257 Predictable Latency Mode: Not Supported 00:32:25.257 Traffic Based Keep ALive: Supported 00:32:25.257 Namespace Granularity: Not Supported 00:32:25.257 SQ Associations: Not Supported 00:32:25.257 UUID List: Not Supported 00:32:25.257 Multi-Domain Subsystem: Not Supported 00:32:25.257 Fixed Capacity Management: Not Supported 00:32:25.257 Variable Capacity Management: Not Supported 00:32:25.257 Delete Endurance Group: Not Supported 00:32:25.257 Delete NVM Set: Not Supported 00:32:25.257 Extended LBA Formats Supported: Not Supported 00:32:25.257 Flexible Data Placement Supported: Not Supported 00:32:25.257 00:32:25.257 Controller Memory Buffer Support 00:32:25.257 ================================ 00:32:25.257 Supported: No 00:32:25.257 00:32:25.257 Persistent Memory Region Support 00:32:25.257 ================================ 00:32:25.257 Supported: No 00:32:25.257 00:32:25.257 Admin Command Set Attributes 00:32:25.257 ============================ 00:32:25.257 Security Send/Receive: Not Supported 00:32:25.257 Format NVM: Not Supported 00:32:25.257 Firmware Activate/Download: Not Supported 00:32:25.257 Namespace Management: Not Supported 00:32:25.257 Device Self-Test: Not Supported 00:32:25.257 Directives: Not Supported 00:32:25.257 NVMe-MI: Not Supported 00:32:25.257 Virtualization Management: Not Supported 00:32:25.258 Doorbell Buffer Config: Not Supported 00:32:25.258 Get LBA Status Capability: Not Supported 00:32:25.258 Command & Feature Lockdown Capability: Not Supported 00:32:25.258 Abort Command Limit: 4 00:32:25.258 Async Event Request Limit: 4 00:32:25.258 Number of Firmware Slots: N/A 00:32:25.258 Firmware Slot 1 Read-Only: N/A 00:32:25.258 Firmware Activation Without Reset: N/A 00:32:25.258 Multiple Update Detection Support: N/A 00:32:25.258 Firmware Update Granularity: No Information Provided 00:32:25.258 Per-Namespace SMART Log: Yes 00:32:25.258 Asymmetric Namespace Access Log Page: Supported 00:32:25.258 ANA Transition Time : 10 sec 00:32:25.258 00:32:25.258 Asymmetric Namespace Access Capabilities 00:32:25.258 ANA Optimized State : Supported 00:32:25.258 ANA Non-Optimized State : Supported 00:32:25.258 ANA Inaccessible State : Supported 00:32:25.258 ANA Persistent Loss State : Supported 00:32:25.258 ANA Change State : Supported 00:32:25.258 ANAGRPID is not 
changed : No 00:32:25.258 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:25.258 00:32:25.258 ANA Group Identifier Maximum : 128 00:32:25.258 Number of ANA Group Identifiers : 128 00:32:25.258 Max Number of Allowed Namespaces : 1024 00:32:25.258 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:25.258 Command Effects Log Page: Supported 00:32:25.258 Get Log Page Extended Data: Supported 00:32:25.258 Telemetry Log Pages: Not Supported 00:32:25.258 Persistent Event Log Pages: Not Supported 00:32:25.258 Supported Log Pages Log Page: May Support 00:32:25.258 Commands Supported & Effects Log Page: Not Supported 00:32:25.258 Feature Identifiers & Effects Log Page:May Support 00:32:25.258 NVMe-MI Commands & Effects Log Page: May Support 00:32:25.258 Data Area 4 for Telemetry Log: Not Supported 00:32:25.258 Error Log Page Entries Supported: 128 00:32:25.258 Keep Alive: Supported 00:32:25.258 Keep Alive Granularity: 1000 ms 00:32:25.258 00:32:25.258 NVM Command Set Attributes 00:32:25.258 ========================== 00:32:25.258 Submission Queue Entry Size 00:32:25.258 Max: 64 00:32:25.258 Min: 64 00:32:25.258 Completion Queue Entry Size 00:32:25.258 Max: 16 00:32:25.258 Min: 16 00:32:25.258 Number of Namespaces: 1024 00:32:25.258 Compare Command: Not Supported 00:32:25.258 Write Uncorrectable Command: Not Supported 00:32:25.258 Dataset Management Command: Supported 00:32:25.258 Write Zeroes Command: Supported 00:32:25.258 Set Features Save Field: Not Supported 00:32:25.258 Reservations: Not Supported 00:32:25.258 Timestamp: Not Supported 00:32:25.258 Copy: Not Supported 00:32:25.258 Volatile Write Cache: Present 00:32:25.258 Atomic Write Unit (Normal): 1 00:32:25.258 Atomic Write Unit (PFail): 1 00:32:25.258 Atomic Compare & Write Unit: 1 00:32:25.258 Fused Compare & Write: Not Supported 00:32:25.258 Scatter-Gather List 00:32:25.258 SGL Command Set: Supported 00:32:25.258 SGL Keyed: Not Supported 00:32:25.258 SGL Bit Bucket Descriptor: Not Supported 00:32:25.258 SGL Metadata Pointer: Not Supported 00:32:25.258 Oversized SGL: Not Supported 00:32:25.258 SGL Metadata Address: Not Supported 00:32:25.258 SGL Offset: Supported 00:32:25.258 Transport SGL Data Block: Not Supported 00:32:25.258 Replay Protected Memory Block: Not Supported 00:32:25.258 00:32:25.258 Firmware Slot Information 00:32:25.258 ========================= 00:32:25.258 Active slot: 0 00:32:25.258 00:32:25.258 Asymmetric Namespace Access 00:32:25.258 =========================== 00:32:25.258 Change Count : 0 00:32:25.258 Number of ANA Group Descriptors : 1 00:32:25.258 ANA Group Descriptor : 0 00:32:25.258 ANA Group ID : 1 00:32:25.258 Number of NSID Values : 1 00:32:25.258 Change Count : 0 00:32:25.258 ANA State : 1 00:32:25.258 Namespace Identifier : 1 00:32:25.258 00:32:25.258 Commands Supported and Effects 00:32:25.258 ============================== 00:32:25.258 Admin Commands 00:32:25.258 -------------- 00:32:25.258 Get Log Page (02h): Supported 00:32:25.258 Identify (06h): Supported 00:32:25.258 Abort (08h): Supported 00:32:25.258 Set Features (09h): Supported 00:32:25.258 Get Features (0Ah): Supported 00:32:25.258 Asynchronous Event Request (0Ch): Supported 00:32:25.258 Keep Alive (18h): Supported 00:32:25.258 I/O Commands 00:32:25.258 ------------ 00:32:25.258 Flush (00h): Supported 00:32:25.258 Write (01h): Supported LBA-Change 00:32:25.258 Read (02h): Supported 00:32:25.258 Write Zeroes (08h): Supported LBA-Change 00:32:25.258 Dataset Management (09h): Supported 00:32:25.258 00:32:25.258 Error Log 00:32:25.258 ========= 
00:32:25.258 Entry: 0 00:32:25.258 Error Count: 0x3 00:32:25.258 Submission Queue Id: 0x0 00:32:25.258 Command Id: 0x5 00:32:25.258 Phase Bit: 0 00:32:25.258 Status Code: 0x2 00:32:25.258 Status Code Type: 0x0 00:32:25.258 Do Not Retry: 1 00:32:25.258 Error Location: 0x28 00:32:25.258 LBA: 0x0 00:32:25.258 Namespace: 0x0 00:32:25.258 Vendor Log Page: 0x0 00:32:25.258 ----------- 00:32:25.258 Entry: 1 00:32:25.258 Error Count: 0x2 00:32:25.258 Submission Queue Id: 0x0 00:32:25.258 Command Id: 0x5 00:32:25.258 Phase Bit: 0 00:32:25.258 Status Code: 0x2 00:32:25.258 Status Code Type: 0x0 00:32:25.258 Do Not Retry: 1 00:32:25.258 Error Location: 0x28 00:32:25.258 LBA: 0x0 00:32:25.258 Namespace: 0x0 00:32:25.258 Vendor Log Page: 0x0 00:32:25.258 ----------- 00:32:25.258 Entry: 2 00:32:25.258 Error Count: 0x1 00:32:25.258 Submission Queue Id: 0x0 00:32:25.258 Command Id: 0x4 00:32:25.258 Phase Bit: 0 00:32:25.258 Status Code: 0x2 00:32:25.258 Status Code Type: 0x0 00:32:25.258 Do Not Retry: 1 00:32:25.258 Error Location: 0x28 00:32:25.258 LBA: 0x0 00:32:25.258 Namespace: 0x0 00:32:25.258 Vendor Log Page: 0x0 00:32:25.258 00:32:25.258 Number of Queues 00:32:25.258 ================ 00:32:25.258 Number of I/O Submission Queues: 128 00:32:25.258 Number of I/O Completion Queues: 128 00:32:25.258 00:32:25.258 ZNS Specific Controller Data 00:32:25.258 ============================ 00:32:25.258 Zone Append Size Limit: 0 00:32:25.258 00:32:25.258 00:32:25.258 Active Namespaces 00:32:25.258 ================= 00:32:25.258 get_feature(0x05) failed 00:32:25.258 Namespace ID:1 00:32:25.258 Command Set Identifier: NVM (00h) 00:32:25.258 Deallocate: Supported 00:32:25.258 Deallocated/Unwritten Error: Not Supported 00:32:25.258 Deallocated Read Value: Unknown 00:32:25.258 Deallocate in Write Zeroes: Not Supported 00:32:25.258 Deallocated Guard Field: 0xFFFF 00:32:25.258 Flush: Supported 00:32:25.258 Reservation: Not Supported 00:32:25.258 Namespace Sharing Capabilities: Multiple Controllers 00:32:25.258 Size (in LBAs): 1953525168 (931GiB) 00:32:25.258 Capacity (in LBAs): 1953525168 (931GiB) 00:32:25.258 Utilization (in LBAs): 1953525168 (931GiB) 00:32:25.258 UUID: 49d848aa-8d23-4156-865c-5f28aac006f2 00:32:25.258 Thin Provisioning: Not Supported 00:32:25.258 Per-NS Atomic Units: Yes 00:32:25.258 Atomic Boundary Size (Normal): 0 00:32:25.258 Atomic Boundary Size (PFail): 0 00:32:25.258 Atomic Boundary Offset: 0 00:32:25.258 NGUID/EUI64 Never Reused: No 00:32:25.258 ANA group ID: 1 00:32:25.258 Namespace Write Protected: No 00:32:25.258 Number of LBA Formats: 1 00:32:25.258 Current LBA Format: LBA Format #00 00:32:25.258 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:25.258 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:25.258 rmmod nvme_tcp 00:32:25.258 rmmod nvme_fabrics 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:25.258 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:25.259 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:25.259 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:25.259 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.259 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:25.259 01:18:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:27.785 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:28.350 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:28.350 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:28.350 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:28.607 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:28.607 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:28.607 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:28.607 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:28.607 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:28.607 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:28.607 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:28.607 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:28.607 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:28.607 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:28.607 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:32:28.607 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:28.607 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:29.540 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:29.798 00:32:29.798 real 0m9.250s 00:32:29.798 user 0m1.922s 00:32:29.798 sys 0m3.236s 00:32:29.798 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:29.798 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.798 ************************************ 00:32:29.798 END TEST nvmf_identify_kernel_target 00:32:29.798 ************************************ 00:32:29.798 01:18:22 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:29.798 01:18:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:29.799 01:18:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:29.799 01:18:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.799 ************************************ 00:32:29.799 START TEST nvmf_auth_host 00:32:29.799 ************************************ 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:29.799 * Looking for test storage... 00:32:29.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
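(The configure_kernel_target/clean_kernel_target pair exercised in the test above reduces to the configfs sequence below; a condensed sketch using the paths from this run. The xtrace shows the echo commands without their redirection targets, so the attribute names here follow the standard kernel nvmet configfs layout and should be read as the likely mapping, not a verbatim replay:

    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet          # nvmet-tcp is demand-loaded when the tcp port is enabled
    mkdir $subsys $subsys/namespaces/1 $cfs/ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model   # Model Number seen above
    echo 1            > $subsys/attr_allow_any_host
    echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
    echo 1            > $subsys/namespaces/1/enable
    echo 10.0.0.1     > $cfs/ports/1/addr_traddr
    echo tcp          > $cfs/ports/1/addr_trtype
    echo 4420         > $cfs/ports/1/addr_trsvcid
    echo ipv4         > $cfs/ports/1/addr_adrfam
    ln -s $subsys $cfs/ports/1/subsystems/    # expose the subsystem on the port

    # teardown, mirroring the clean_kernel_target trace above
    echo 0 > $subsys/namespaces/1/enable
    rm -f $cfs/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir $subsys/namespaces/1 $cfs/ports/1 $subsys
    modprobe -r nvmet_tcp nvmet
)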
00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:29.799 01:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:31.699 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:31.700 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:31.700 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
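gather_supported_nvmf_pci_devs walks a pre-built pci_bus_cache and matches known NIC IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox ConnectX parts); on this rig it finds the two E810 ports at 0000:0a:00.0/1. A standalone sketch of the same scan done directly against sysfs, since the cache itself is populated elsewhere in common.sh:

# Find Intel E810 functions by vendor/device ID straight from sysfs.
intel=0x8086
e810=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
        e810+=("${dev##*/}")
        echo "Found ${dev##*/} ($vendor - $device)"
    fi
done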
]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:31.700 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:31.700 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
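Each matched PCI function is then resolved to its kernel interface name by globbing the net/ directory its driver exposes in sysfs, which is how the trace arrives at cvl_0_0 and cvl_0_1. The same resolution in isolation:

# Map a PCI function to the netdev(s) it registered, e.g. cvl_0_0.
pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name
echo "Found net devices under $pci: ${pci_net_devs[*]}"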
ip -4 addr flush cvl_0_0 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:31.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:32:31.700 00:32:31.700 --- 10.0.0.2 ping statistics --- 00:32:31.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.700 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:31.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:32:31.700 00:32:31.700 --- 10.0.0.1 ping statistics --- 00:32:31.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.700 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:31.700 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:31.958 01:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:31.958 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:31.958 01:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:31.958 01:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.958 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3908438 00:32:31.959 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:31.959 01:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
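nvmf_tcp_init then turns the two ports into a point-to-point test fabric: cvl_0_0 moves into a private namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened for NVMe/TCP, and one ping in each direction proves the link before anything NVMe-related starts. Condensed from the commands traced above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                 # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
modprobe nvme-tcp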
waitforlisten 3908438 00:32:31.959 01:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3908438 ']' 00:32:31.959 01:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.959 01:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:31.959 01:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.959 01:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:31.959 01:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a64fa362f90e627569dec0a823da0af9 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jbY 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a64fa362f90e627569dec0a823da0af9 0 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a64fa362f90e627569dec0a823da0af9 0 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a64fa362f90e627569dec0a823da0af9 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jbY 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jbY 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.jbY 
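keys[0] above is produced by gen_dhchap_key: xxd reads len/2 random bytes from /dev/urandom as a hex string, and format_dhchap_key wraps that string into a DHHC-1 secret through an inline "python -" step whose body xtrace does not show. The framing below, base64 over the ASCII hex secret plus its little-endian CRC32, is inferred from the key strings visible later in this log (they base64-decode to the hex secret plus four trailing bytes), so read it as a sketch rather than the canonical implementation:

# gen_dhchap_key null 32, reconstructed; digest ids: 0=null 1=sha256 2=sha384 3=sha512
digest=0
len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. a64fa362f90e6275...
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")          # assumed framing
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"
echo "$file"                                     # e.g. /tmp/spdk.key-null.jbY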
00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=02f38fc5e8b4fb954d22145a2520df2dbe5570676ccd71fdf0006f022c9a4d6b 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nGK 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 02f38fc5e8b4fb954d22145a2520df2dbe5570676ccd71fdf0006f022c9a4d6b 3 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 02f38fc5e8b4fb954d22145a2520df2dbe5570676ccd71fdf0006f022c9a4d6b 3 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=02f38fc5e8b4fb954d22145a2520df2dbe5570676ccd71fdf0006f022c9a4d6b 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nGK 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nGK 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.nGK 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=82d949bc711fed06bd66e6c868b5f928f9823855851b7c6c 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cqn 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 82d949bc711fed06bd66e6c868b5f928f9823855851b7c6c 0 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 82d949bc711fed06bd66e6c868b5f928f9823855851b7c6c 0 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=82d949bc711fed06bd66e6c868b5f928f9823855851b7c6c 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cqn 00:32:32.217 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cqn 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.cqn 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=daf37de70bd00c7302fde2c602c20035c04abca0ac470347 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dn1 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key daf37de70bd00c7302fde2c602c20035c04abca0ac470347 2 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 daf37de70bd00c7302fde2c602c20035c04abca0ac470347 2 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=daf37de70bd00c7302fde2c602c20035c04abca0ac470347 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:32.218 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dn1 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dn1 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.dn1 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=bf8ba2643890a196701ce1bb6fe71645 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZcT 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bf8ba2643890a196701ce1bb6fe71645 1 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bf8ba2643890a196701ce1bb6fe71645 1 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bf8ba2643890a196701ce1bb6fe71645 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZcT 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZcT 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZcT 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:32.476 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=039d88fa66f541f0cdce12641cb0dddd 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.L8o 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 039d88fa66f541f0cdce12641cb0dddd 1 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 039d88fa66f541f0cdce12641cb0dddd 1 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=039d88fa66f541f0cdce12641cb0dddd 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.L8o 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.L8o 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.L8o 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.477 01:18:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e1d6babe80269638c1c781c8a8a5c2f4a2f53c0955a8d8fd 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.j5B 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e1d6babe80269638c1c781c8a8a5c2f4a2f53c0955a8d8fd 2 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e1d6babe80269638c1c781c8a8a5c2f4a2f53c0955a8d8fd 2 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e1d6babe80269638c1c781c8a8a5c2f4a2f53c0955a8d8fd 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.j5B 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.j5B 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.j5B 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f9d518a861447135f0282756f2b22b9b 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gF8 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f9d518a861447135f0282756f2b22b9b 0 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f9d518a861447135f0282756f2b22b9b 0 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f9d518a861447135f0282756f2b22b9b 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:32.477 01:18:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gF8 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gF8 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.gF8 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=214445e001bdf36a62864a1e9417e915db6e69c67f0f004bb527da2b0e1c60fb 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.U5Z 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 214445e001bdf36a62864a1e9417e915db6e69c67f0f004bb527da2b0e1c60fb 3 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 214445e001bdf36a62864a1e9417e915db6e69c67f0f004bb527da2b0e1c60fb 3 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=214445e001bdf36a62864a1e9417e915db6e69c67f0f004bb527da2b0e1c60fb 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:32.477 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.U5Z 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.U5Z 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.U5Z 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3908438 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3908438 ']' 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jbY 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.nGK ]] 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nGK 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.767 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cqn 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.dn1 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dn1 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZcT 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.L8o ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L8o 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
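With nvmf_tgt listening on /var/tmp/spdk.sock, every generated secret is loaded into SPDK's keyring over RPC, key<i> for the host keys and ckey<i> for the controller keys, exactly as the rpc_cmd calls above show. Against a running target this is plain rpc.py (the stock SPDK client; the paths are the tmpfiles from the generation step):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    $rpc keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then                # slot 4 has no controller key
        $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done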
00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.j5B 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gF8 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gF8 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.U5Z 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.025 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:33.026 01:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:33.960 Waiting for block devices as requested 00:32:33.960 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:34.218 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:34.218 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:34.476 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:34.476 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:34.476 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:34.733 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:34.733 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:34.733 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:34.733 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:34.991 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:34.991 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:34.991 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:34.991 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.249 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:35.249 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:35.249 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:35.815 No valid GPT data, bailing 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- 
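The kernel target needs a namespace backing device, so after setup.sh reset hands the drive back from vfio-pci to the kernel nvme driver, the script takes the first non-zoned nvme block device that carries no partition table ("No valid GPT data, bailing" is the desired outcome here). Roughly, with plain blkid standing in for SPDK's spdk-gpt.py probe:

nvme=
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # skip zoned namespaces
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # empty PTTYPE means no partition table, i.e. the disk is free to use
    if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        nvme=/dev/$dev
        break
    fi
done
echo "using $nvme"   # /dev/nvme0n1 in this run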
nvmf/common.sh@667 -- # echo 1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:35.815 00:32:35.815 Discovery Log Number of Records 2, Generation counter 2 00:32:35.815 =====Discovery Log Entry 0====== 00:32:35.815 trtype: tcp 00:32:35.815 adrfam: ipv4 00:32:35.815 subtype: current discovery subsystem 00:32:35.815 treq: not specified, sq flow control disable supported 00:32:35.815 portid: 1 00:32:35.815 trsvcid: 4420 00:32:35.815 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:35.815 traddr: 10.0.0.1 00:32:35.815 eflags: none 00:32:35.815 sectype: none 00:32:35.815 =====Discovery Log Entry 1====== 00:32:35.815 trtype: tcp 00:32:35.815 adrfam: ipv4 00:32:35.815 subtype: nvme subsystem 00:32:35.815 treq: not specified, sq flow control disable supported 00:32:35.815 portid: 1 00:32:35.815 trsvcid: 4420 00:32:35.815 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:35.815 traddr: 10.0.0.1 00:32:35.815 eflags: none 00:32:35.815 sectype: none 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 
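Everything on the target side is plain configfs, and the discovery log just printed (a discovery entry plus nqn.2024-02.io.spdk:cnode0, both on 10.0.0.1:4420) confirms the port and subsystem are live before authentication is switched on. The echo targets are hidden by xtrace, so the attribute file names below (device_path, enable, addr_*, attr_allow_any_host, and especially the dhchap_* host attributes) are the standard kernel nvmet names we believe those writes land in; verify against your kernel before reusing:

cfs=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0 hostnqn=nqn.2024-02.io.spdk:host0
mkdir $cfs/subsystems/$subnqn
mkdir $cfs/subsystems/$subnqn/namespaces/1
mkdir $cfs/ports/1
echo /dev/nvme0n1 > $cfs/subsystems/$subnqn/namespaces/1/device_path
echo 1            > $cfs/subsystems/$subnqn/namespaces/1/enable
echo 10.0.0.1     > $cfs/ports/1/addr_traddr
echo tcp          > $cfs/ports/1/addr_trtype
echo 4420         > $cfs/ports/1/addr_trsvcid
echo ipv4         > $cfs/ports/1/addr_adrfam
ln -s $cfs/subsystems/$subnqn $cfs/ports/1/subsystems/
mkdir $cfs/hosts/$hostnqn                          # nvmet_auth_init
echo 0 > $cfs/subsystems/$subnqn/attr_allow_any_host
ln -s $cfs/hosts/$hostnqn $cfs/subsystems/$subnqn/allowed_hosts/
echo 'hmac(sha256)' > $cfs/hosts/$hostnqn/dhchap_hash     # nvmet_auth_set_key
echo ffdhe2048      > $cfs/hosts/$hostnqn/dhchap_dhgroup
echo "$key"         > $cfs/hosts/$hostnqn/dhchap_key      # DHHC-1:00:... host secret
echo "$ckey"        > $cfs/hosts/$hostnqn/dhchap_ctrl_key # DHHC-1:02:... controller secret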
]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.815 01:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.073 nvme0n1 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.073 01:18:29 
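On the initiator side the whole handshake reduces to two RPCs, both visible verbatim above: bdev_nvme_set_options constrains which digests and DH groups the host may negotiate, and bdev_nvme_attach_controller names the keyring entries to authenticate with. The nvme0n1 lines that follow are the proof: the bdev only materializes if DH-HMAC-CHAP succeeded.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0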
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.073 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.074 
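From here the log settles into the main sweep: for every digest x DH group x key slot the kernel host entry is reprogrammed, the SPDK host is pinned to exactly that one digest and group, the controller is attached, verified, and detached again. A skeleton of that loop with connect_authenticate's three RPC steps inlined (nvmet_auth_set_key is the script's own helper, the one that rewrites the four dhchap_* host attributes shown in the configfs sketch above):

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
            ckey=()                                            # slot 4 has no ckey
            [[ -n ${ckeys[keyid]} ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")
            $rpc bdev_nvme_set_options --dhchap-digests "$digest" \
                                       --dhchap-dhgroups "$dhgroup"
            $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 -q "$hostnqn" -n "$subnqn" \
                --dhchap-key "key$keyid" "${ckey[@]}"
            [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            $rpc bdev_nvme_detach_controller nvme0
        done
    done
done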
01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.074 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.332 nvme0n1 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.332 01:18:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.332 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.590 nvme0n1 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.590 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.591 nvme0n1 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.591 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:36.849 01:18:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.849 nvme0n1 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.849 01:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.108 nvme0n1 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.108 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.109 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.367 nvme0n1 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.367 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.625 nvme0n1 00:32:37.625 
01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.625 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.626 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.884 nvme0n1 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.884 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
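Each nvmet_auth_set_key call is paired with connect_authenticate (host/auth.sh@55-@65), whose full flow the trace repeats for every digest/dhgroup/keyid combination: pin the SPDK initiator to the one digest and DH group under test, attach to the kernel target with the matching key names, check that exactly one controller named nvme0 showed up, and detach again. A condensed reconstruction of that helper from the traced steps (rpc_cmd, keys, and ckeys come from the surrounding harness):

    # Condensed from the @55-@65 trace; the xtrace toggling and error
    # plumbing (xtrace_disable / set +x) around each RPC are omitted.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # @58: expands to nothing when ckeys[keyid] is empty (see keyid=4)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"     # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                     # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                       # @65
    }

Incidentally, the @64 check is an ordinary quoted string comparison; xtrace prints it as [[ nvme0 == \n\v\m\e\0 ]] because the right-hand side of == inside [[ ]] is a pattern, so each literally-matched character is escaped in the trace output.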
00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.884 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.142 nvme0n1 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.142 
01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.142 01:18:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.142 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.401 nvme0n1 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:38.401 01:18:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.401 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.967 nvme0n1 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.967 01:18:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.967 01:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.225 nvme0n1 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.225 01:18:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.225 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.484 nvme0n1 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
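Stepping back, the host/auth.sh@101-@104 markers scattered through the log give the overall shape of the sweep: an outer loop over DH groups and an inner loop over every generated key index, each iteration doing a set-key/connect/verify/detach round trip. By this point the log has finished ffdhe2048 and ffdhe3072 and is working through ffdhe4096. A sketch of that driver, with the digest fixed at sha256 as it is throughout this stretch (an enclosing digest loop is likely but not visible here) and the group list inferred from this slice of the log (auth.sh itself may list more):

    # dhgroups inferred from the groups visible in this part of the log
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
    for dhgroup in "${dhgroups[@]}"; do                      # @101
        for keyid in "${!keys[@]}"; do                       # @102
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # @103
            connect_authenticate sha256 "$dhgroup" "$keyid"  # @104
        done
    done

The one per-key wrinkle is the @58 array expansion: ${ckeys[keyid]:+...} emits the --dhchap-ctrlr-key argument pair only when a controller key exists. ckeys[4] is empty (the @46 trace for keyid=4 shows ckey=), so every keyid=4 attach runs without it and exercises unidirectional authentication:

    # keyid=4 carries no controller key, so nothing reaches the RPC
    ckeys[4]=''
    ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})
    echo "${#ckey[@]}"    # prints 0: the array expanded to no words at all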
00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.484 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.050 nvme0n1 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.050 01:18:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.050 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.308 nvme0n1 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.308 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:40.309 01:18:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.309 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.874 nvme0n1 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.874 
01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.874 01:18:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.874 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.440 nvme0n1 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.440 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.441 01:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.007 nvme0n1 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.007 
01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.007 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.265 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.265 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.265 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.265 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:42.265 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.265 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.523 nvme0n1 00:32:42.523 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.523 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.523 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.523 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.523 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.523 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.782 01:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.347 nvme0n1 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.347 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.348 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 nvme0n1 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.281 01:18:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.281 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.282 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.282 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.282 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.282 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.214 nvme0n1 00:32:45.214 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.215 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.473 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.407 nvme0n1 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.407 
01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
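[Editor's note] The get_main_ns_ip trace running through this point (nvmf/common.sh@741-755) maps the transport to the *name* of an environment variable and then dereferences it indirectly, which is why the log shows one [[ -z ]] test on the name (NVMF_INITIATOR_IP) and a second on its value (10.0.0.1). A reconstruction from those trace lines; the failure handling is inferred, since only the success path appears in the log:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP   # first target-namespace IP for RDMA runs
          [tcp]=NVMF_INITIATOR_IP       # initiator-side IP for TCP runs (this job)
      )
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # indirect expansion; resolves to 10.0.0.1 here
      echo "${!ip}"
  }
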
00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.407 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.341 nvme0n1 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:47.341 
01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.341 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.274 nvme0n1 00:32:48.274 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.274 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.274 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.274 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.274 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.274 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.275 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.534 nvme0n1 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
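The ckey=() assignment traced at host/auth.sh@58 above is what makes bidirectional authentication optional per key. A minimal, self-contained sketch of that ${var:+word} expansion (the key material here is a placeholder, not one of the test's secrets):

#!/usr/bin/env bash
# ${ckeys[keyid]:+word} expands to word only when ckeys[keyid] is set and
# non-empty, so the ckey array picks up the two extra CLI arguments only for
# keys that have a controller secret; keyid 4 in this run has none, and the
# attach then requests unidirectional authentication.
ckeys=([0]="DHHC-1:03:placeholder-secret:" [4]="")
for keyid in 0 4; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
done
# prints: keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
#         keyid=4 -> 0 extra arg(s):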
00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.534 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.817 nvme0n1 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:48.817 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.818 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.081 nvme0n1 00:32:49.081 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.081 01:18:41 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.081 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.081 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.081 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.081 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.081 nvme0n1 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.081 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.340 nvme0n1 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.340 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.341 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
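The get_main_ns_ip helper being traced here resolves which address the initiator should dial: a transport-to-variable-name map, then bash indirect expansion. A condensed reconstruction from the traced statements (the transport variable's name, TEST_TRANSPORT, is an assumption; only its value, tcp, is visible in the trace):

#!/usr/bin/env bash
# Sketch of the helper at nvmf/common.sh@741-755: the map stores the *name*
# of the environment variable per transport; ${!ip} then dereferences it.
get_main_ns_ip() {
  local ip
  local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}  # traced as: ip=NVMF_INITIATOR_IP
  [[ -z ${!ip} ]] && return 1           # traced as: [[ -z 10.0.0.1 ]]
  echo "${!ip}"                         # 10.0.0.1 for every attach in this run
}
TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1   # demo values
get_main_ns_ip                                   # -> 10.0.0.1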
00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.599 nvme0n1 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.599 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
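connect_authenticate sha384 ffdhe3072 1, whose body the trace enters next, is the initiator half of every iteration above. Condensed into plain rpc.py calls it is the four steps below (rpc_cmd in the trace is the suite's wrapper around SPDK's scripts/rpc.py; key1/ckey1 are key names assumed to have been registered earlier in the test, outside this excerpt):

# 1. Constrain the initiator to exactly the digest/dhgroup pair under test.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# 2. Attach with DH-HMAC-CHAP; the controller key makes the auth bidirectional.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
# 3. Success is asserted by the controller appearing under its requested name
#    (the [[ nvme0 == \n\v\m\e\0 ]] checks in the trace: an escaped, literal match).
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
# 4. Detach so the next digest/dhgroup/keyid combination starts from scratch.
./scripts/rpc.py bdev_nvme_detach_controller nvme0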
00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.600 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.857 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.857 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.857 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.857 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.857 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.857 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.857 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.857 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.858 nvme0n1 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.858 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.858 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.858 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.116 nvme0n1 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.116 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.374 nvme0n1 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.374 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.632 nvme0n1 00:32:50.632 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.633 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.633 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.633 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.633 01:18:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.633 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.891 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.150 nvme0n1 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.150 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.408 nvme0n1 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.408 01:18:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.408 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.666 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.924 nvme0n1 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:51.924 01:18:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.924 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.186 nvme0n1 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:52.186 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.443 nvme0n1 00:32:52.443 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.443 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.443 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.443 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.443 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.443 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.701 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.267 nvme0n1 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.267 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.832 nvme0n1 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.832 01:18:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:53.832 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.833 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.406 nvme0n1 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.406 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.972 nvme0n1 00:32:54.972 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.972 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.972 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.972 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.972 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.972 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
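# --- Editorial note: the nvmf/common.sh@741-755 block repeated above is get_main_ns_ip
# selecting which address to dial. A minimal sketch reconstructed from the xtrace (not
# copied from nvmf/common.sh); $transport stands in for whatever variable the harness
# actually tests — its traced value here is "tcp":
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP     # RDMA runs resolve the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP         # TCP runs resolve the initiator IP
    [[ -z $transport ]] && return 1                # traced as: [[ -z tcp ]]
    [[ -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}                # ip holds a variable *name*, not an address
    [[ -z ${!ip} ]] && return 1                    # indirect expansion; traced as: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                  # -> 10.0.0.1, fed to bdev_nvme_attach_controller -a
}
# --- end note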
00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.972 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.538 nvme0n1 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
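# --- Editorial note: every connect_authenticate <digest> <dhgroup> <keyid> pass in this
# log reduces to the same host-side RPC sequence. A minimal sketch, assuming $rpc points
# at SPDK's scripts/rpc.py and that key0..key4 / ckey0..ckey4 were registered with the
# bdev layer earlier in the test (all RPC names and flags below appear verbatim above):
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 ckey=()
    # restrict the host to a single digest/dhgroup combination for this pass
    $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # the controller key is optional; keyid=4 has none, mirroring the traced
    # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion
    [[ -n ${ckeys[$keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # authentication succeeded only if the controller actually came up...
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # ...after which it is torn down before the next digest/dhgroup/key combination
    $rpc bdev_nvme_detach_controller nvme0
}
# --- end note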
00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.538 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.472 nvme0n1 00:32:56.472 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.472 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.472 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.472 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.472 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.472 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.730 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.663 nvme0n1 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.663 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.664 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.664 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.664 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.664 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.664 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.664 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.664 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.664 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.598 nvme0n1 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.598 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.533 nvme0n1 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.533 01:18:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.533 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.791 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.730 nvme0n1 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.730 nvme0n1 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.730 01:18:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.730 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.988 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.989 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.989 nvme0n1 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.989 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.246 nvme0n1 00:33:01.246 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.246 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.246 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.246 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.247 01:18:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.247 01:18:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.247 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.504 nvme0n1 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.504 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.505 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.763 nvme0n1 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.763 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.022 nvme0n1 00:33:02.022 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.022 
01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.022 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.022 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.022 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.022 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.022 01:18:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.022 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.280 nvme0n1 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
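The echo 'hmac(sha512)' / echo ffdhe3072 steps above, and the key echo that follows, come from nvmet_auth_set_key, which provisions the kernel nvmet host entry before each connect attempt. A minimal reconstruction under assumptions: the xtrace output shows the echoed values but hides redirections, so the ${nvmet_host} configfs directory and the dhchap_* attribute names below are inferred, not taken from this log.

    # Sketch only: ${nvmet_host} (the configfs directory for the host NQN)
    # and the dhchap_* attribute names are assumptions; the trace shows the
    # values being echoed but not where they are written.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"

        echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"
        echo "${dhgroup}" > "${nvmet_host}/dhchap_dhgroup"
        echo "${key}" > "${nvmet_host}/dhchap_key"
        # keyid 4 has no controller key, hence the [[ -z '' ]] checks
        # visible in the trace for that tuple.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host}/dhchap_ctrl_key"
    }
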
00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.280 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.538 nvme0n1 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.538 01:18:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.538 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
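The ip_candidates setup above continues below with the transport checks; together they form get_main_ns_ip (nvmf/common.sh), which resolves the address the initiator dials for the active transport. Reconstructed from the expansions in this log; the name TEST_TRANSPORT is a stand-in assumption for whatever variable expands to tcp in the [[ -z tcp ]] test, and here the chosen candidate NVMF_INITIATOR_IP dereferences to 10.0.0.1.

    # Reconstruction from the traced expansions; $TEST_TRANSPORT is assumed.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z ${TEST_TRANSPORT} ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }
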
00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.539 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.796 nvme0n1 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.796 
01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.796 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.054 nvme0n1 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.054 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.055 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.055 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.055 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.055 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.313 nvme0n1 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.313 01:18:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.313 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.314 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.314 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.314 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.880 nvme0n1 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
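
The nvmet_auth_set_key calls traced here hand one DH-HMAC-CHAP secret pair per key id to the kernel target before each connect attempt. The key= and ckey= values use the DHHC-1 transport representation, DHHC-1:<id>:<base64>:, where an <id> of 00 means the secret is used as-is and 01/02/03 mark secrets sized for SHA-256/384/512 (the keyid=4 secret below carries the 03 prefix and the longest payload). A minimal sketch of unpacking one, reusing a key string from this trace; the 4-byte CRC-32 trailer inside the base64 payload is how nvme-cli-style generators build these strings per TP 8006, not something host/auth.sh itself verifies:

  # hypothetical helper, not part of host/auth.sh: split a DHHC-1 secret string
  key='DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==:'
  b64=${key#DHHC-1:*:}              # strip the "DHHC-1:<id>:" prefix (shortest match)
  b64=${b64%:}                      # strip the trailing ':'
  echo "$b64" | base64 -d | wc -c   # decoded length = secret bytes + 4-byte CRC-32 trailer
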
00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.880 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.881 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.139 nvme0n1 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.139 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.402 nvme0n1 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.402 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.711 nvme0n1 00:33:04.711 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.711 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.711 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.711 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.711 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.711 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
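
The get_main_ns_ip helper traced around this point (nvmf/common.sh@741-755) picks the dial address purely from the transport: an associative array maps each transport to the name of an environment variable, and bash indirect expansion dereferences it, which is why the trace first tests [[ -z NVMF_INITIATOR_IP ]] (the name) and then [[ -z 10.0.0.1 ]] (its value). A condensed sketch, assuming TEST_TRANSPORT and the NVMF_* variables are exported by the surrounding harness, with the error paths simplified:

  # condensed from the nvmf/common.sh trace; not a verbatim copy of the helper
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] || return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # a variable *name*, e.g. NVMF_INITIATOR_IP
      [[ -n ${!ip} ]] || return 1            # indirect expansion yields the address
      echo "${!ip}"                          # 10.0.0.1 in this run
  }
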
00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.969 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.536 nvme0n1 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
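
Every iteration in this trace follows the same contract: program the initiator with a single digest/dhgroup combination, attach with the matching key pair, confirm a controller actually materialized, and detach before the next key id. Condensed into the rpc_cmd calls as they appear in the log (rpc_cmd is the autotest wrapper around SPDK's rpc.py):

  # one connect_authenticate round, condensed from the ffdhe6144/keyid=1 trace below
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key1 --dhchap-ctrlr-key ckey1
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                      # handshake succeeded end to end
  rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next key id

The only pass/fail signal is the controller name: a failed DH-HMAC-CHAP handshake leaves bdev_nvme_get_controllers empty, so the [[ ]] comparison fails and the test run aborts.
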
00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.536 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.102 nvme0n1 00:33:06.102 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.102 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.103 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.669 nvme0n1 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.669 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.235 nvme0n1 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.235 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.801 nvme0n1 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.801 01:19:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTY0ZmEzNjJmOTBlNjI3NTY5ZGVjMGE4MjNkYTBhZjmwd//x: 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDJmMzhmYzVlOGI0ZmI5NTRkMjIxNDVhMjUyMGRmMmRiZTU1NzA2NzZjY2Q3MWZkZjAwMDZmMDIyYzlhNGQ2YjwKKdU=: 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.801 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.734 nvme0n1 00:33:08.734 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.734 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.734 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.734 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.734 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.992 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.926 nvme0n1 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.926 01:19:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY4YmEyNjQzODkwYTE5NjcwMWNlMWJiNmZlNzE2NDWD2ggJ: 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: ]] 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDM5ZDg4ZmE2NmY1NDFmMGNkY2UxMjY0MWNiMGRkZGQnEYox: 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.926 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.858 nvme0n1 00:33:10.858 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.858 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.858 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.858 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.858 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.858 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTFkNmJhYmU4MDI2OTYzOGMxYzc4MWM4YThhNWMyZjRhMmY1M2MwOTU1YThkOGZkMOHq9g==: 00:33:11.116 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: ]] 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjlkNTE4YTg2MTQ0NzEzNWYwMjgyNzU2ZjJiMjJiOWIxPv+g: 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:11.117 01:19:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.117 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.051 nvme0n1 00:33:12.051 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.051 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
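# [editor's annotation] The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
# idiom traced at host/auth.sh@58 above (and again below for keyid=4, whose ckey is
# empty) is what makes one-way authentication fall out for free: ${parameter:+word}
# expands to word only when the parameter is set and non-empty, so the array -- and
# with it the whole flag pair -- vanishes when no controller key was configured.
# Minimal standalone demo:
ckeys=([3]="ckey-present" [4]="")
for keyid in 3 4; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=3 -> 2 extra args: --dhchap-ctrlr-key ckey3
# keyid=4 -> 0 extra args: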
"${!keys[@]}" 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjE0NDQ1ZTAwMWJkZjM2YTYyODY0YTFlOTQxN2U5MTVkYjZlNjljNjdmMGYwMDRiYjUyN2RhMmIwZTFjNjBmYm2Aid4=: 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:12.052 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.987 nvme0n1 00:33:12.987 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.987 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.987 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.987 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.987 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.987 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJkOTQ5YmM3MTFmZWQwNmJkNjZlNmM4NjhiNWY5MjhmOTgyMzg1NTg1MWI3YzZjK2hdmQ==: 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGFmMzdkZTcwYmQwMGM3MzAyZmRlMmM2MDJjMjAwMzVjMDRhYmNhMGFjNDcwMzQ3OLTAGA==: 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.987 
01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.987 request: 00:33:12.987 { 00:33:12.987 "name": "nvme0", 00:33:12.987 "trtype": "tcp", 00:33:12.987 "traddr": "10.0.0.1", 00:33:12.987 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:12.987 "adrfam": "ipv4", 00:33:12.987 "trsvcid": "4420", 00:33:12.987 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:12.987 "method": "bdev_nvme_attach_controller", 00:33:12.987 "req_id": 1 00:33:12.987 } 00:33:12.987 Got JSON-RPC error response 00:33:12.987 response: 00:33:12.987 { 00:33:12.987 "code": -5, 00:33:12.987 "message": "Input/output error" 00:33:12.987 } 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:12.987 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:13.246 
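# [editor's annotation -- sketch] The request/response pair above is the first negative
# case: with the target still demanding DH-HMAC-CHAP, an attach presenting no key must
# be rejected, surfacing as JSON-RPC error -5 (Input/output error). The test's NOT()
# wrapper simply asserts a non-zero exit status; an equivalent hand check would be:
if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
     -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
  echo "FAIL: unauthenticated connect unexpectedly succeeded" >&2
  exit 1
fi
# the follow-up `bdev_nvme_get_controllers | jq length` == 0 seen above confirms the
# failed attach left no stray controller behind; the two cases that follow (wrong
# key2, then key1 paired with ckey2) are asserted the same way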
01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.246 request: 00:33:13.246 { 00:33:13.246 "name": "nvme0", 00:33:13.246 "trtype": "tcp", 00:33:13.246 "traddr": "10.0.0.1", 00:33:13.246 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:13.246 "adrfam": "ipv4", 00:33:13.246 "trsvcid": "4420", 00:33:13.246 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:13.246 "dhchap_key": "key2", 00:33:13.246 "method": "bdev_nvme_attach_controller", 00:33:13.246 "req_id": 1 00:33:13.246 } 00:33:13.246 Got JSON-RPC error response 00:33:13.246 response: 00:33:13.246 { 00:33:13.246 "code": -5, 00:33:13.246 "message": "Input/output error" 00:33:13.246 } 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.246 
01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.246 request: 00:33:13.246 { 00:33:13.246 "name": "nvme0", 00:33:13.246 "trtype": "tcp", 00:33:13.246 "traddr": "10.0.0.1", 00:33:13.246 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:13.246 "adrfam": "ipv4", 00:33:13.246 "trsvcid": "4420", 00:33:13.246 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:13.246 "dhchap_key": "key1", 00:33:13.246 "dhchap_ctrlr_key": "ckey2", 00:33:13.246 "method": "bdev_nvme_attach_controller", 00:33:13.246 "req_id": 1 
00:33:13.246 } 00:33:13.246 Got JSON-RPC error response 00:33:13.246 response: 00:33:13.246 { 00:33:13.246 "code": -5, 00:33:13.246 "message": "Input/output error" 00:33:13.246 } 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:13.246 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:13.246 rmmod nvme_tcp 00:33:13.505 rmmod nvme_fabrics 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3908438 ']' 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3908438 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3908438 ']' 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3908438 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3908438 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3908438' 00:33:13.505 killing process with pid 3908438 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3908438 00:33:13.505 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3908438 00:33:13.763 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:13.763 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:13.763 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:13.763 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:13.763 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:13.763 01:19:06 
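# [editor's annotation] nvmftestfini above unwinds the initiator first and the fixture
# second: flush, unload the host-side NVMe/TCP modules, then kill the long-running app.
# Condensed from the trace (the modprobe loop retries up to 20 times in the script):
sync
modprobe -v -r nvme-tcp        # logged above as 'rmmod nvme_tcp' / 'rmmod nvme_fabrics'
modprobe -v -r nvme-fabrics
kill 3908438 && wait 3908438   # killprocess: signal the tracked pid, then reap it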
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.763 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:13.763 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:15.665 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:17.038 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:17.038 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:17.038 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:17.038 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:17.038 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:17.038 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:17.038 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:17.038 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:17.038 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:17.038 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:17.038 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:17.038 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:17.038 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:17.038 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:17.038 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:17.038 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:17.973 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:17.973 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.jbY /tmp/spdk.key-null.cqn /tmp/spdk.key-sha256.ZcT /tmp/spdk.key-sha384.j5B /tmp/spdk.key-sha512.U5Z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:17.973 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:19.349 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:19.349 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:19.349 0000:00:04.6 (8086 0e26): Already using the 
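# [editor's annotation] The kernel-target teardown above is order-sensitive: configfs
# directories return EBUSY on rmdir while symlinks into them remain, so the host ACL
# link and the port->subsystem link go first, then namespace, port and subsystem, and
# only then can the nvmet modules unload. The bare `echo 0` at nvmf/common.sh@686 has
# an invisible redirect; namespaces/1/enable is the assumed target:
CFG=/sys/kernel/config/nvmet
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOSTNQN=nqn.2024-02.io.spdk:host0
rm    $CFG/subsystems/$SUBNQN/allowed_hosts/$HOSTNQN   # drop the host ACL link first
rmdir $CFG/hosts/$HOSTNQN
echo 0 > $CFG/subsystems/$SUBNQN/namespaces/1/enable   # assumed redirect target
rm -f  $CFG/ports/1/subsystems/$SUBNQN                 # unlink port from subsystem
rmdir  $CFG/subsystems/$SUBNQN/namespaces/1
rmdir  $CFG/ports/1
rmdir  $CFG/subsystems/$SUBNQN
modprobe -r nvmet_tcp nvmet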
vfio-pci driver 00:33:19.349 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:19.349 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:19.349 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:19.349 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:19.349 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:19.349 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:19.349 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:19.349 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:19.349 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:19.349 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:19.349 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:19.349 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:19.349 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:19.349 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:19.349 00:33:19.349 real 0m49.536s 00:33:19.349 user 0m47.414s 00:33:19.349 sys 0m5.611s 00:33:19.349 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:19.349 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.349 ************************************ 00:33:19.349 END TEST nvmf_auth_host 00:33:19.349 ************************************ 00:33:19.349 01:19:12 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:19.349 01:19:12 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:19.349 01:19:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:19.349 01:19:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:19.349 01:19:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:19.349 ************************************ 00:33:19.349 START TEST nvmf_digest 00:33:19.349 ************************************ 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:19.349 * Looking for test storage... 
00:33:19.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.349 01:19:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:19.350 01:19:12 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:19.350 01:19:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:21.248 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:21.248 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:21.248 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:21.248 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.248 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:21.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:33:21.507 00:33:21.507 --- 10.0.0.2 ping statistics --- 00:33:21.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.507 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
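# [editor's annotation] nvmf_tcp_init above builds the standard phy-TCP fixture: the
# two ports of the e810 NIC (presumably cabled back-to-back) are split across network
# namespaces so one machine can play both ends. Condensed from the trace, with the
# interface names as discovered on this host:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2    # root ns -> netns sanity check; the reverse ping follows above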
00:33:21.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:33:21.507 00:33:21.507 --- 10.0.0.1 ping statistics --- 00:33:21.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.507 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:21.507 ************************************ 00:33:21.507 START TEST nvmf_digest_clean 00:33:21.507 ************************************ 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3917959 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3917959 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3917959 ']' 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.507 
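# [editor's annotation] For the digest suite the roles flip back: nvmfappstart above
# launches the SPDK target *inside* the namespace (via the NVMF_TARGET_NS_CMD prefix),
# so nvmf_tgt owns 10.0.0.2 while the bdevperf initiator stays in the root namespace.
# Stripped of its wrappers the launch is:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!    # 3917959 in this run; waitforlisten then polls /var/tmp/spdk.sock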
01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:21.507 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.507 [2024-07-25 01:19:14.553143] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:21.507 [2024-07-25 01:19:14.553218] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.507 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.507 [2024-07-25 01:19:14.616867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.766 [2024-07-25 01:19:14.705674] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:21.766 [2024-07-25 01:19:14.705727] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.766 [2024-07-25 01:19:14.705756] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:21.766 [2024-07-25 01:19:14.705768] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:21.766 [2024-07-25 01:19:14.705777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:21.766 [2024-07-25 01:19:14.705802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.766 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.766 null0 00:33:21.766 [2024-07-25 01:19:14.894902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.024 [2024-07-25 01:19:14.919155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3917984 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3917984 /var/tmp/bperf.sock 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3917984 ']' 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:22.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:22.024 01:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.024 [2024-07-25 01:19:14.965802] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:22.024 [2024-07-25 01:19:14.965878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917984 ] 00:33:22.024 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.024 [2024-07-25 01:19:15.027717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.024 [2024-07-25 01:19:15.118752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.281 01:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:22.281 01:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:22.281 01:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:22.281 01:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:22.281 01:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:22.538 01:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:22.538 01:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.131 nvme0n1 00:33:23.131 01:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:23.131 01:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.131 Running I/O for 2 seconds... 
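# [editor's annotation -- sketch] The run_bperf invocation above boils down to four
# steps against the dedicated /var/tmp/bperf.sock RPC socket (paths shortened from the
# workspace-absolute ones in the log):
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 \
  -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# --ddgst enables the NVMe/TCP data digest (crc32c) on every data PDU, which is what
# the accel crc32c statistics checked after the run are expected to account for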
00:33:25.027 00:33:25.027 Latency(us) 00:33:25.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.027 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:25.027 nvme0n1 : 2.01 18309.24 71.52 0.00 0.00 6981.52 3762.25 14854.83 00:33:25.027 =================================================================================================================== 00:33:25.027 Total : 18309.24 71.52 0.00 0.00 6981.52 3762.25 14854.83 00:33:25.027 0 00:33:25.027 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:25.027 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:25.027 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:25.027 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:25.027 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:25.027 | select(.opcode=="crc32c") 00:33:25.027 | "\(.module_name) \(.executed)"' 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3917984 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3917984 ']' 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3917984 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:25.294 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3917984 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3917984' 00:33:25.551 killing process with pid 3917984 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3917984 00:33:25.551 Received shutdown signal, test time was about 2.000000 seconds 00:33:25.551 00:33:25.551 Latency(us) 00:33:25.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.551 =================================================================================================================== 00:33:25.551 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3917984 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:25.551 01:19:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3918396 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3918396 /var/tmp/bperf.sock 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3918396 ']' 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:25.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:25.551 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:25.552 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.809 [2024-07-25 01:19:18.722585] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:25.809 [2024-07-25 01:19:18.722673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918396 ] 00:33:25.809 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:25.809 Zero copy mechanism will not be used. 
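For reference, the run_bperf pass traced above (digest.sh@77-92) reduces to the following minimal bash sketch. It is reconstructed from the trace only; the paths, the /var/tmp/bperf.sock address, and all flags are taken verbatim from the log, while the backgrounding and socket wait are assumed to match what the waitforlisten helper in autotest_common.sh does.

# Minimal sketch of one run_bperf pass, reconstructed from the trace above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf paused (-z --wait-for-rpc) on core 1 (-m 2),
#    serving its RPC interface on $BPERF_SOCK (digest.sh@82-83).
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
bperfpid=$!
# (the harness blocks here on waitforlisten until $BPERF_SOCK accepts connections)

# 2. Finish framework init, then attach the NVMe/TCP target with data
#    digest enabled (--ddgst), matching digest.sh@87-89.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Kick off the timed I/O pass (digest.sh@92).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests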
00:33:25.809 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.809 [2024-07-25 01:19:18.781905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.809 [2024-07-25 01:19:18.869601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.809 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.809 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:25.809 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:25.809 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:25.809 01:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:26.374 01:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.374 01:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.631 nvme0n1 00:33:26.631 01:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:26.632 01:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:26.889 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:26.889 Zero copy mechanism will not be used. 00:33:26.889 Running I/O for 2 seconds... 
00:33:28.789 00:33:28.789 Latency(us) 00:33:28.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.789 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:28.789 nvme0n1 : 2.00 3681.23 460.15 0.00 0.00 4341.69 1334.99 11747.93 00:33:28.789 =================================================================================================================== 00:33:28.789 Total : 3681.23 460.15 0.00 0.00 4341.69 1334.99 11747.93 00:33:28.789 0 00:33:28.789 01:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:28.789 01:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:28.789 01:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:28.789 01:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:28.789 01:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:28.789 | select(.opcode=="crc32c") 00:33:28.789 | "\(.module_name) \(.executed)"' 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3918396 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3918396 ']' 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3918396 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3918396 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3918396' 00:33:29.047 killing process with pid 3918396 00:33:29.047 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3918396 00:33:29.047 Received shutdown signal, test time was about 2.000000 seconds 00:33:29.048 00:33:29.048 Latency(us) 00:33:29.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.048 =================================================================================================================== 00:33:29.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:29.048 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3918396 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:29.306 01:19:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3918844 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3918844 /var/tmp/bperf.sock 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3918844 ']' 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:29.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:29.306 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:29.306 [2024-07-25 01:19:22.376587] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:33:29.306 [2024-07-25 01:19:22.376678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918844 ] 00:33:29.306 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.306 [2024-07-25 01:19:22.444236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.564 [2024-07-25 01:19:22.541541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.564 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:29.564 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:29.564 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:29.564 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:29.564 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:29.822 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.822 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.389 nvme0n1 00:33:30.389 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:30.389 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:30.389 Running I/O for 2 seconds... 
00:33:32.915 00:33:32.915 Latency(us) 00:33:32.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.915 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:32.915 nvme0n1 : 2.00 20559.21 80.31 0.00 0.00 6218.86 3228.25 16505.36 00:33:32.915 =================================================================================================================== 00:33:32.915 Total : 20559.21 80.31 0.00 0.00 6218.86 3228.25 16505.36 00:33:32.915 0 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:32.916 | select(.opcode=="crc32c") 00:33:32.916 | "\(.module_name) \(.executed)"' 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3918844 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3918844 ']' 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3918844 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3918844 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3918844' 00:33:32.916 killing process with pid 3918844 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3918844 00:33:32.916 Received shutdown signal, test time was about 2.000000 seconds 00:33:32.916 00:33:32.916 Latency(us) 00:33:32.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.916 =================================================================================================================== 00:33:32.916 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.916 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3918844 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:32.916 01:19:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3919323 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3919323 /var/tmp/bperf.sock 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3919323 ']' 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.916 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:32.916 [2024-07-25 01:19:26.047054] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:32.916 [2024-07-25 01:19:26.047146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919323 ] 00:33:32.916 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:32.916 Zero copy mechanism will not be used. 
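The pass/fail decision in each run above comes from the accel framework's crc32c counters rather than from bdevperf's latency table. A minimal sketch of that check, using the exact jq filter from digest.sh@36-37 and assuming the same SPDK_DIR and BPERF_SOCK values shown in the trace:

# Pull accel stats from bdevperf and keep only the crc32c row
# (module name plus executed count), as digest.sh@36-37 does.
read -r acc_module acc_executed < <(
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats |
    jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
)

# With scan_dsa=false the expected module is the software crc32c path;
# the run passes only if that module actually executed (digest.sh@94-96).
exp_module=software
(( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]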
00:33:33.174 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.174 [2024-07-25 01:19:26.109400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.174 [2024-07-25 01:19:26.199379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.174 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:33.174 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:33.174 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:33.174 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:33.174 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:33.433 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.433 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.999 nvme0n1 00:33:33.999 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:33.999 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:33.999 Zero copy mechanism will not be used. 00:33:33.999 Running I/O for 2 seconds... 
00:33:35.897 00:33:35.897 Latency(us) 00:33:35.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.897 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:35.897 nvme0n1 : 2.01 3725.13 465.64 0.00 0.00 4283.32 2305.90 6941.96 00:33:35.897 =================================================================================================================== 00:33:35.897 Total : 3725.13 465.64 0.00 0.00 4283.32 2305.90 6941.96 00:33:35.897 0 00:33:35.897 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:35.897 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:35.897 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:35.897 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:35.897 | select(.opcode=="crc32c") 00:33:35.897 | "\(.module_name) \(.executed)"' 00:33:35.897 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3919323 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3919323 ']' 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3919323 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:36.155 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3919323 00:33:36.413 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:36.413 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:36.413 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3919323' 00:33:36.413 killing process with pid 3919323 00:33:36.413 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3919323 00:33:36.413 Received shutdown signal, test time was about 2.000000 seconds 00:33:36.413 00:33:36.413 Latency(us) 00:33:36.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.413 =================================================================================================================== 00:33:36.413 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:36.414 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3919323 00:33:36.414 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3917959 00:33:36.414 01:19:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3917959 ']' 00:33:36.414 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3917959 00:33:36.414 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:36.414 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:36.414 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3917959 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3917959' 00:33:36.672 killing process with pid 3917959 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3917959 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3917959 00:33:36.672 00:33:36.672 real 0m15.286s 00:33:36.672 user 0m30.510s 00:33:36.672 sys 0m4.093s 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:36.672 ************************************ 00:33:36.672 END TEST nvmf_digest_clean 00:33:36.672 ************************************ 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:36.672 01:19:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:36.930 ************************************ 00:33:36.930 START TEST nvmf_digest_error 00:33:36.930 ************************************ 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3919762 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3919762 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3919762 ']' 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:36.930 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.930 [2024-07-25 01:19:29.890951] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:36.930 [2024-07-25 01:19:29.891044] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.930 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.930 [2024-07-25 01:19:29.953871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.930 [2024-07-25 01:19:30.045164] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.930 [2024-07-25 01:19:30.045238] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.930 [2024-07-25 01:19:30.045285] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.930 [2024-07-25 01:19:30.045309] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.930 [2024-07-25 01:19:30.045330] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
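The nvmf_digest_error test starting here differs from the clean runs only in routing crc32c work through the error-injecting accel module. The RPC sequence traced below (digest.sh@104 plus @63 and @67) amounts to this sketch; rpc_cmd is the autotest wrapper seen in the trace, standing in for scripts/rpc.py against the target's RPC socket.

# Target side: route all crc32c operations to the "error" accel module
# before framework init completes (digest.sh@104).
rpc_cmd accel_assign_opc -o crc32c -m error

# Per bperf run: start with injection disabled, then corrupt 256 crc32c
# operations once I/O is flowing (digest.sh@63 and @67).
rpc_cmd accel_error_inject_error -o crc32c -t disable
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

Each corrupted digest then surfaces on the initiator as the nvme_tcp data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that fill the remainder of this log.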
00:33:36.930 [2024-07-25 01:19:30.045370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.189 [2024-07-25 01:19:30.130119] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.189 null0 00:33:37.189 [2024-07-25 01:19:30.250936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.189 [2024-07-25 01:19:30.275159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3919822 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3919822 /var/tmp/bperf.sock 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3919822 ']' 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:37.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:37.189 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.189 [2024-07-25 01:19:30.322556] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:37.189 [2024-07-25 01:19:30.322634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919822 ] 00:33:37.447 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.447 [2024-07-25 01:19:30.386262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.447 [2024-07-25 01:19:30.477141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.447 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.447 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:37.447 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:37.447 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:38.012 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:38.012 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.012 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.012 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.012 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.012 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.270 nvme0n1 00:33:38.270 01:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:38.270 01:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.270 01:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.270 01:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.270 01:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:38.270 01:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:38.528 Running I/O for 2 seconds... 00:33:38.529 [2024-07-25 01:19:31.492162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.492215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.492238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.503700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.503749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.503770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.520220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.520263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.520284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.532321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.532351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.532368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.547921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.547956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.547975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.565331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.565360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.565377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.576970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.577004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:704 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.577024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.593338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.593382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.593400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.608937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.608971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.608990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.621303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.621333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.621350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.637570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.637605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.637625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.649895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.649928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.649948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.529 [2024-07-25 01:19:31.666509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.529 [2024-07-25 01:19:31.666541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.529 [2024-07-25 01:19:31.666557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.787 [2024-07-25 01:19:31.682125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.787 [2024-07-25 01:19:31.682155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:44 nsid:1 lba:4960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.787 [2024-07-25 01:19:31.682172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.787 [2024-07-25 01:19:31.695409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.787 [2024-07-25 01:19:31.695440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.787 [2024-07-25 01:19:31.695457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.787 [2024-07-25 01:19:31.709589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.787 [2024-07-25 01:19:31.709623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.787 [2024-07-25 01:19:31.709643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.787 [2024-07-25 01:19:31.720941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.787 [2024-07-25 01:19:31.720975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.787 [2024-07-25 01:19:31.720994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.787 [2024-07-25 01:19:31.736290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.787 [2024-07-25 01:19:31.736319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.787 [2024-07-25 01:19:31.736335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.787 [2024-07-25 01:19:31.749443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.787 [2024-07-25 01:19:31.749474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.787 [2024-07-25 01:19:31.749498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.787 [2024-07-25 01:19:31.762974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.787 [2024-07-25 01:19:31.763008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.787 [2024-07-25 01:19:31.763027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.787 [2024-07-25 01:19:31.776786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153a360) 00:33:38.787 [2024-07-25 01:19:31.776820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:38.788 [2024-07-25 01:19:31.776839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[a long run of near-identical entries condensed, 01:19:31.788 through 01:19:33.475: nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done repeatedly reports "*ERROR*: data digest error on tqpair=(0x153a360)", and each affected READ (len:1 block, varying cid and lba) is completed as "*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) ... sqhd:0001 p:0 m:0 dnr:0"; these completions are what the transient-error count checked below adds up to]
00:33:40.340
00:33:40.340 Latency(us)
00:33:40.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.340 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:40.340 nvme0n1 : 2.01 18069.93 70.59 0.00 0.00 7074.32 3640.89 25437.68
00:33:40.340 ===================================================================================================================
00:33:40.340 Total : 18069.93 70.59 0.00 0.00 7074.32 3640.89 25437.68
00:33:40.340 0
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
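The get_transient_errcount helper traced above amounts to one RPC plus one jq filter: ask bdevperf for per-bdev I/O statistics over its UNIX-socket RPC channel, then pull the NVMe transient-transport-error counter out of the JSON. A minimal standalone sketch of the same query, reusing the socket path and bdev name from this run (the helper's real definition lives in test/nvmf/host/digest.sh and may differ in detail; the counter is only populated because bdevperf was configured with bdev_nvme_set_options --nvme-error-stat):

  #!/usr/bin/env bash
  # Count completions recorded as NVMe "transient transport error" for a bdev.
  sock=/var/tmp/bperf.sock   # bdevperf RPC socket used in this run
  bdev=nvme0n1               # bdev created by bdev_nvme_attach_controller

  count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # An error-injection pass is considered successful when the count is non-zero.
  (( count > 0 )) && echo "observed $count transient transport errors"

In this pass the filter returns 142, which the (( ... > 0 )) check on the next trace line treats as a passing result.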
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 ))
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3919822
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3919822 ']'
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3919822
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:40.623 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3919822
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3919822'
killing process with pid 3919822
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3919822
Received shutdown signal, test time was about 2.000000 seconds
00:33:40.934
00:33:40.934 Latency(us)
00:33:40.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.934 ===================================================================================================================
00:33:40.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3919822
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3920310
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3920310 /var/tmp/bperf.sock
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3920310 ']'
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
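The relaunch traced above is the common shape of every pass in this test: start bdevperf idle and wait for its RPC socket before configuring anything. A rough bash equivalent of that launch-and-wait step; the polling loop is a hypothetical stand-in for the harness's waitforlisten (the real helper in autotest_common.sh tracks the pid and retry budget more carefully), and rpc_get_methods is used only as a cheap liveness probe:

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  sock=/var/tmp/bperf.sock

  # -m 2: core mask (core 1); -w/-o/-q/-t: randread, 128 KiB I/Os, qd 16, 2 s;
  # -z: start idle and wait to be configured over RPC.
  "$bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Hypothetical stand-in for waitforlisten: poll until the RPC socket answers
  # (the traced helper caps this at max_retries=100).
  for ((i = 0; i < 100; i++)); do
      scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done

The -z flag is what keeps bdevperf from running a job immediately: the bdev stack is built over RPC first, and the workload is released later with perform_tests.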
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:40.934 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:40.934 [2024-07-25 01:19:34.021915] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
00:33:40.934 [2024-07-25 01:19:34.021996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920310 ]
00:33:40.934 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:40.934 Zero copy mechanism will not be used.
00:33:40.934 EAL: No free 2048 kB hugepages reported on node 1
00:33:40.934 [2024-07-25 01:19:34.083401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:41.193 [2024-07-25 01:19:34.168840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:41.193 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:41.193 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:41.193 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:41.193 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:41.451 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:41.451 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:41.451 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:41.451 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:41.451 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:41.451 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:41.708 nvme0n1
00:33:41.709 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:41.709 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:41.709 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:41.966 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:41.966 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:41.966 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:41.966 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:41.966 Zero copy mechanism will not be used.
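The sequence just traced is what manufactures the errors that follow: error counters are switched on in the NVMe bdev layer, any leftover crc32c fault injection is cleared, the controller is attached with TCP data digests enabled (--ddgst), and the accel layer is then told to corrupt 32 crc32c operations, so receive-side digest verification fails and each affected READ completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) rather than handing back a corrupted payload. Condensed into plain commands, as a sketch assuming rpc.py and bdevperf.py are on PATH and the same socket, target address, and subsystem NQN as in the trace:

# Track per-status-code NVMe error counters; retry count set as the test does.
rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Make sure no stale crc32c error injection is active before attaching.
rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
# Attach over TCP with the data digest (DDGST) enabled on the connection.
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the next 32 crc32c computations, then start the queued bdevperf jobs.
rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
bdevperf.py -s /var/tmp/bperf.sock perform_tests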
00:33:41.966 Running I/O for 2 seconds... 00:33:41.966 [2024-07-25 01:19:34.981176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:34.981232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:34.981264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:34.989850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:34.989884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:34.989902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:34.999022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:34.999057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:34.999077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.008749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.008785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.008805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.018689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.018726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.018745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.028064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.028099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.028119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.037833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.037868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.037898] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.047093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.047128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.047147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.056327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.056360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.056377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.065491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.065538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.065555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.075864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.075900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.075920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.085171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.085206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.085225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.094999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.095034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.095053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.104167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.104201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.104221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.966 [2024-07-25 01:19:35.114381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:41.966 [2024-07-25 01:19:35.114413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.966 [2024-07-25 01:19:35.114431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.123369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.123406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.123424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.133421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.133454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.133487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.142217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.142261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.142301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.152321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.152356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.152374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.162214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.162259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.162296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.172639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.172675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:42.224 [2024-07-25 01:19:35.172695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.182830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.182866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.182885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.192917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.192949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.192982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.203067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.203102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.203121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.212600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.212636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.212656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.221597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.221629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.221646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.231192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.231226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.231262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.240364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.240396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.240413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.250268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.250314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.250331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.260292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.260334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.260352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.270223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.270267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.270302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.279987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.224 [2024-07-25 01:19:35.280022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.224 [2024-07-25 01:19:35.280041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.224 [2024-07-25 01:19:35.289237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.289307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.289330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.299072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.299107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.299127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.308397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.308428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.308446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.317684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.317719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.317738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.326670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.326705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.326726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.336455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.336487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.336505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.346264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.346295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.346313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.355664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.355699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.355718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.364391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.364432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.364450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.225 [2024-07-25 01:19:35.374347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.225 [2024-07-25 01:19:35.374385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.225 [2024-07-25 01:19:35.374403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.382131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.382180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.382199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.391677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.391712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.391732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.400694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.400726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.400761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.410375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.410420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.410437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.419751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.419782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.419799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.429447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.429480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.429497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.439108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 
01:19:35.439155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.439175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.448168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.448200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.448217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.457654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.457689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.457709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.467294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.467342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.467359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.476025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.476056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.476072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.480679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.480713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.480732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.488321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.488351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.488368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.496544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x876d50) 00:33:42.483 [2024-07-25 01:19:35.496592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.483 [2024-07-25 01:19:35.496610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.483 [2024-07-25 01:19:35.504668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.504701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.504719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.512819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.512859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.512877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.521215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.521259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.521291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.529343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.529373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.529403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.537401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.537429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.537445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.545785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.545819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.545838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.554072] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.554105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.554123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.562193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.562226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.562253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.570410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.570440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.570457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.578614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.578656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.578672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.586674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.586706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.586725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.594869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.594902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.594920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.603045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.603078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.603096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:42.484 [2024-07-25 01:19:35.611286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.611331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.611347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.619485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.619516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.619550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.484 [2024-07-25 01:19:35.627843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.484 [2024-07-25 01:19:35.627876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.484 [2024-07-25 01:19:35.627894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.636512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.636542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.636559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.645147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.645181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.645201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.653534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.653563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.653579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.662139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.662172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.662197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.670159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.670193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.670211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.678477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.678507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.678540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.686584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.686631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.686649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.694754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.694787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.694805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.702779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.702811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.702829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.710924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.710956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.710975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.719041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.719073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.719092] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.727148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.727180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.727199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.735319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.735352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.735368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.743492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.743535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.743551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.751712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.751745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.751763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.759838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.759870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.759889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.768178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.768210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.768228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.776472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.776500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.776532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.784718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.784751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.784769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.794400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.794435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.794454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.804529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.804578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.804597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.813495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.813538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.813553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.822655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.822689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.822709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.832779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.832813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.743 [2024-07-25 01:19:35.832832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.743 [2024-07-25 01:19:35.842526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50) 00:33:42.743 [2024-07-25 01:19:35.842573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:42.743 [2024-07-25 01:19:35.842593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:42.743 [2024-07-25 01:19:35.852542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50)
00:33:42.743 [2024-07-25 01:19:35.852587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.744 [2024-07-25 01:19:35.852603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... log records from 01:19:35.860550 through 01:19:36.923727 elided: the same three-record triplet repeats throughout (a data digest error on tqpair=(0x876d50), a READ command notice, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion), differing only in timestamp, cid, sqhd, and lba ...]
qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.041 [2024-07-25 01:19:36.940461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50)
00:33:44.041 [2024-07-25 01:19:36.940503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.041 [2024-07-25 01:19:36.940519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.041 [2024-07-25 01:19:36.948732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50)
00:33:44.041 [2024-07-25 01:19:36.948765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.041 [2024-07-25 01:19:36.948783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.041 [2024-07-25 01:19:36.956897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50)
00:33:44.041 [2024-07-25 01:19:36.956929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.041 [2024-07-25 01:19:36.956948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.041 [2024-07-25 01:19:36.965289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50)
00:33:44.041 [2024-07-25 01:19:36.965319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.041 [2024-07-25 01:19:36.965351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.041 [2024-07-25 01:19:36.973385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x876d50)
00:33:44.041 [2024-07-25 01:19:36.973415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.041 [2024-07-25 01:19:36.973431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.041
00:33:44.041 Latency(us)
00:33:44.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.041 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:44.041 nvme0n1 : 2.00 3611.08 451.38 0.00 0.00 4424.74 1310.72 10534.31
00:33:44.041 ===================================================================================================================
00:33:44.041 Total : 3611.08 451.38 0.00 0.00 4424.74 1310.72 10534.31
00:33:44.041 0
00:33:44.041 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:44.041 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:44.041 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b
nvme0n1 00:33:44.041 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:44.041 | .driver_specific 00:33:44.041 | .nvme_error 00:33:44.041 | .status_code 00:33:44.041 | .command_transient_transport_error' 00:33:44.298 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 233 > 0 )) 00:33:44.298 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3920310 00:33:44.298 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3920310 ']' 00:33:44.298 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3920310 00:33:44.298 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:44.298 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:44.298 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3920310 00:33:44.298 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:44.299 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:44.299 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3920310' 00:33:44.299 killing process with pid 3920310 00:33:44.299 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3920310 00:33:44.299 Received shutdown signal, test time was about 2.000000 seconds 00:33:44.299 00:33:44.299 Latency(us) 00:33:44.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.299 =================================================================================================================== 00:33:44.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.299 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3920310 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3920716 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3920716 /var/tmp/bperf.sock 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3920716 ']' 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:44.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:44.557 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:44.557 [2024-07-25 01:19:37.557010] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:44.557 [2024-07-25 01:19:37.557103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920716 ] 00:33:44.557 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.557 [2024-07-25 01:19:37.619718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.557 [2024-07-25 01:19:37.707599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.815 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:44.815 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:44.815 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:44.815 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:45.073 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:45.073 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.073 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.073 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.073 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.073 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.331 nvme0n1 00:33:45.331 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:45.331 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.331 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.331 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.331 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:45.331 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.589 Running I/O for 2 seconds... 
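Before the two-second randwrite run that starts here, digest.sh issued the same handful of RPCs that its bperf_rpc/rpc_cmd wrappers expand to in the trace above. Condensed into a standalone Bash sketch, assuming the paths, socket, address, and NQN shown in the log, and assuming rpc_cmd reaches the nvmf target over its default RPC socket (the wrapper hides the actual address):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle (-z): it initializes, listens on $SOCK, and waits for an
  # RPC to kick off the workload (the harness polls with waitforlisten).
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z &

  # Keep per-status error counters and retry failed I/O indefinitely, so the
  # injected digest faults surface as counted transient errors, not job failures.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the controller with data digest enabled (--ddgst) so every payload
  # carries a CRC32C that both ends verify.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: have the accel layer corrupt crc32c results (-o/-t/-i exactly
  # as in the trace), then drive the queued bdevperf job.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

  # The pass/fail check afterwards is the trace's jq filter as a one-liner:
  $SPDK/scripts/rpc.py -s $SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'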
00:33:45.589 [2024-07-25 01:19:38.548922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ee5c8 00:33:45.589 [2024-07-25 01:19:38.549970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.589 [2024-07-25 01:19:38.550015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:45.589 [2024-07-25 01:19:38.561137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fac10 00:33:45.589 [2024-07-25 01:19:38.562153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.589 [2024-07-25 01:19:38.562187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:45.589 [2024-07-25 01:19:38.574680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190eaef0 00:33:45.589 [2024-07-25 01:19:38.575863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.589 [2024-07-25 01:19:38.575897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:45.589 [2024-07-25 01:19:38.587993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e1b48 00:33:45.589 [2024-07-25 01:19:38.589368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.589 [2024-07-25 01:19:38.589400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:45.589 [2024-07-25 01:19:38.601358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e84c0 00:33:45.589 [2024-07-25 01:19:38.602867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.589 [2024-07-25 01:19:38.602902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:45.589 [2024-07-25 01:19:38.613172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fac10 00:33:45.590 [2024-07-25 01:19:38.614207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.614257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.626025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e23b8 00:33:45.590 [2024-07-25 01:19:38.626893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.626928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 
sqhd:0065 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.638984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f0bc0 00:33:45.590 [2024-07-25 01:19:38.640153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.640185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.651981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f46d0 00:33:45.590 [2024-07-25 01:19:38.653346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.653376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.664049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f9f68 00:33:45.590 [2024-07-25 01:19:38.665420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.665449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.677253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ec840 00:33:45.590 [2024-07-25 01:19:38.678767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.678799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.690574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f7100 00:33:45.590 [2024-07-25 01:19:38.692267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.692322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.703833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e73e0 00:33:45.590 [2024-07-25 01:19:38.705542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.705571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.715962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f8618 00:33:45.590 [2024-07-25 01:19:38.717734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.717763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.724153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e3498 00:33:45.590 [2024-07-25 01:19:38.724966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.724996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:45.590 [2024-07-25 01:19:38.736024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e6fa8 00:33:45.590 [2024-07-25 01:19:38.736814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.590 [2024-07-25 01:19:38.736844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:45.848 [2024-07-25 01:19:38.749857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e8088 00:33:45.848 [2024-07-25 01:19:38.751345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.848 [2024-07-25 01:19:38.751385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:45.848 [2024-07-25 01:19:38.763114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e0a68 00:33:45.848 [2024-07-25 01:19:38.764794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.848 [2024-07-25 01:19:38.764827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:45.848 [2024-07-25 01:19:38.774917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ee190 00:33:45.848 [2024-07-25 01:19:38.776078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.848 [2024-07-25 01:19:38.776111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:45.848 [2024-07-25 01:19:38.787757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f1868 00:33:45.848 [2024-07-25 01:19:38.788785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.788818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.799727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e6300 00:33:45.849 [2024-07-25 01:19:38.801505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.801536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.810575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f6cc8 00:33:45.849 [2024-07-25 01:19:38.811437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.811466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.823732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e95a0 00:33:45.849 [2024-07-25 01:19:38.824698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.824730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.837852] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190eaef0 00:33:45.849 [2024-07-25 01:19:38.839032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.839065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.849633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f6458 00:33:45.849 [2024-07-25 01:19:38.850796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.850831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.862905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190df118 00:33:45.849 [2024-07-25 01:19:38.864238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.864279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.876112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e73e0 00:33:45.849 [2024-07-25 01:19:38.877624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.877656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.889389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f0ff8 00:33:45.849 [2024-07-25 01:19:38.891049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.891082] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.902706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ebfd0 00:33:45.849 [2024-07-25 01:19:38.904561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.904596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.916124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e4140 00:33:45.849 [2024-07-25 01:19:38.918108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.918141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.925130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fd640 00:33:45.849 [2024-07-25 01:19:38.925942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.925975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.937159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e01f8 00:33:45.849 [2024-07-25 01:19:38.937971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.938003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.950251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e99d8 00:33:45.849 [2024-07-25 01:19:38.951251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.951298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.963541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ed920 00:33:45.849 [2024-07-25 01:19:38.964715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.964749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.977642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e5ec8 00:33:45.849 [2024-07-25 01:19:38.979030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.979059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:45.849 [2024-07-25 01:19:38.991751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f0ff8 00:33:45.849 [2024-07-25 01:19:38.993746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.849 [2024-07-25 01:19:38.993778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:46.107 [2024-07-25 01:19:39.000734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f8a50 00:33:46.107 [2024-07-25 01:19:39.001586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.107 [2024-07-25 01:19:39.001636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:46.107 [2024-07-25 01:19:39.014119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190efae0 00:33:46.107 [2024-07-25 01:19:39.015106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.107 [2024-07-25 01:19:39.015139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.107 [2024-07-25 01:19:39.026068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190eee38 00:33:46.107 [2024-07-25 01:19:39.027037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.107 [2024-07-25 01:19:39.027069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:46.107 [2024-07-25 01:19:39.040120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f4b08 00:33:46.107 [2024-07-25 01:19:39.041339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.107 [2024-07-25 01:19:39.041370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.107 [2024-07-25 01:19:39.052837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f20d8 00:33:46.107 [2024-07-25 01:19:39.054010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.107 [2024-07-25 01:19:39.054043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.107 [2024-07-25 01:19:39.065526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fa3a0 00:33:46.107 [2024-07-25 01:19:39.066708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.107 [2024-07-25 
01:19:39.066741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.107 [2024-07-25 01:19:39.077337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e6fa8 00:33:46.107 [2024-07-25 01:19:39.078482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.107 [2024-07-25 01:19:39.078517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.090507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f81e0 00:33:46.108 [2024-07-25 01:19:39.091823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.091855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.103726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fc560 00:33:46.108 [2024-07-25 01:19:39.105209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.105250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.116660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fda78 00:33:46.108 [2024-07-25 01:19:39.118145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.118178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.128984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e01f8 00:33:46.108 [2024-07-25 01:19:39.129984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.130017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.143444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e12d8 00:33:46.108 [2024-07-25 01:19:39.145482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.145513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.152435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190eff18 00:33:46.108 [2024-07-25 01:19:39.153238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:46.108 [2024-07-25 01:19:39.153291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.165273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e6b70 00:33:46.108 [2024-07-25 01:19:39.166099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.166131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.178274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f9b30 00:33:46.108 [2024-07-25 01:19:39.179256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.179289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.190097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e23b8 00:33:46.108 [2024-07-25 01:19:39.191053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.191086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.203344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f0788 00:33:46.108 [2024-07-25 01:19:39.204493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.204523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.216499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ddc00 00:33:46.108 [2024-07-25 01:19:39.217821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.217856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.230512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e99d8 00:33:46.108 [2024-07-25 01:19:39.232036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.232071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.242205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e4578 00:33:46.108 [2024-07-25 01:19:39.243686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12563 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.243720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:46.108 [2024-07-25 01:19:39.254002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e6b70 00:33:46.108 [2024-07-25 01:19:39.255004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.108 [2024-07-25 01:19:39.255039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:46.366 [2024-07-25 01:19:39.268039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ed4e8 00:33:46.366 [2024-07-25 01:19:39.269705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.366 [2024-07-25 01:19:39.269739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:46.366 [2024-07-25 01:19:39.281311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190eaab8 00:33:46.366 [2024-07-25 01:19:39.283156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.366 [2024-07-25 01:19:39.283191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:46.366 [2024-07-25 01:19:39.294514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e5658 00:33:46.366 [2024-07-25 01:19:39.296591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.366 [2024-07-25 01:19:39.296620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:46.366 [2024-07-25 01:19:39.303476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ee5c8 00:33:46.366 [2024-07-25 01:19:39.304312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.366 [2024-07-25 01:19:39.304342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:46.366 [2024-07-25 01:19:39.316334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fe720 00:33:46.367 [2024-07-25 01:19:39.317157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.317190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.329316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e84c0 00:33:46.367 [2024-07-25 01:19:39.330301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:23629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.330334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.341270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f4298 00:33:46.367 [2024-07-25 01:19:39.342253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.342297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.354505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fc128 00:33:46.367 [2024-07-25 01:19:39.355665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.355698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.367804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e7818 00:33:46.367 [2024-07-25 01:19:39.369143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.369176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.380996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f57b0 00:33:46.367 [2024-07-25 01:19:39.382517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.382563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.394148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fe720 00:33:46.367 [2024-07-25 01:19:39.395832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.395865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.407344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190df550 00:33:46.367 [2024-07-25 01:19:39.409186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.409226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.420498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e5220 00:33:46.367 [2024-07-25 01:19:39.422522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:21166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.422554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.429446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e3d08 00:33:46.367 [2024-07-25 01:19:39.430266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.430301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.443704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fe2e8 00:33:46.367 [2024-07-25 01:19:39.445683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.445715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.454538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e4578 00:33:46.367 [2024-07-25 01:19:39.455522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.455554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.467697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e84c0 00:33:46.367 [2024-07-25 01:19:39.468859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.468893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.480863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f9b30 00:33:46.367 [2024-07-25 01:19:39.482191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.482223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.494030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f8e88 00:33:46.367 [2024-07-25 01:19:39.495551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.495587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:46.367 [2024-07-25 01:19:39.507191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190df988 00:33:46.367 [2024-07-25 01:19:39.508882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.367 [2024-07-25 01:19:39.508914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:46.625 [2024-07-25 01:19:39.520366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f1868 00:33:46.625 [2024-07-25 01:19:39.522210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.625 [2024-07-25 01:19:39.522259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:46.625 [2024-07-25 01:19:39.533540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190eea00 00:33:46.625 [2024-07-25 01:19:39.535580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.625 [2024-07-25 01:19:39.535615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:46.625 [2024-07-25 01:19:39.542528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e27f0 00:33:46.625 [2024-07-25 01:19:39.543355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.625 [2024-07-25 01:19:39.543386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:46.625 [2024-07-25 01:19:39.556786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ef6a8 00:33:46.625 [2024-07-25 01:19:39.558751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.625 [2024-07-25 01:19:39.558783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.625 [2024-07-25 01:19:39.567609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190f4298 00:33:46.625 [2024-07-25 01:19:39.568596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.625 [2024-07-25 01:19:39.568629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:46.625 [2024-07-25 01:19:39.580766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190ed4e8 00:33:46.625 [2024-07-25 01:19:39.581935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.625 [2024-07-25 01:19:39.581969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:46.625 [2024-07-25 01:19:39.593917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190e5a90 00:33:46.626 [2024-07-25 
01:19:39.595250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.626 [2024-07-25 01:19:39.595285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
[... several dozen further cycles of tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0), each followed by a WRITE command print (len:1) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, timestamps 01:19:39.607 to 01:19:40.530, omitted; only the pdu, lba, cid, and sqhd values differ between cycles ...]
00:33:47.404 [2024-07-25 01:19:40.540206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122dbc0) with pdu=0x2000190fb048
00:33:47.404 [2024-07-25 01:19:40.541857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:47.404 [2024-07-25 01:19:40.541886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:47.404
00:33:47.404 Latency(us)
00:33:47.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:47.404 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:47.404 nvme0n1 : 2.01 20635.40 80.61 0.00 0.00 6196.21 2451.53 15146.10
00:33:47.404 ===================================================================================================================
00:33:47.404 Total : 20635.40 80.61 0.00 0.00 6196.21 2451.53 15146.10
00:33:47.404 0
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:47.663 | .driver_specific
00:33:47.663 | .nvme_error
00:33:47.663 | .status_code
00:33:47.663 | .command_transient_transport_error'
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
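For reference, the get_transient_errcount helper traced above boils down to one iostat RPC filtered through jq. A minimal stand-alone sketch of the same check, with the socket path, bdev name, and rpc.py location taken from this run (the variable names are illustrative):

    #!/usr/bin/env bash
    # Read the per-bdev NVMe error counters kept by the bdev layer.
    # Counting requires the --nvme-error-stat option on bdev_nvme_set_options.
    sock=/var/tmp/bperf.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    # digest.sh asserts the count is non-zero; this run saw 162 such completions.
    (( errcount > 0 )) && echo "observed $errcount transient transport error completions"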
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3920716
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3920716 ']'
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3920716
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:47.663 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3920716
00:33:47.921 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:47.921 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:47.921 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3920716'
00:33:47.921 killing process with pid 3920716
00:33:47.921 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3920716
00:33:47.921 Received shutdown signal, test time was about 2.000000 seconds
00:33:47.921
00:33:47.921 Latency(us)
00:33:47.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:47.921 ===================================================================================================================
00:33:47.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:47.921 01:19:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3920716
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3921121
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3921121 /var/tmp/bperf.sock
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3921121 ']'
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:47.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:47.921 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:48.180 [2024-07-25 01:19:41.111602] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
00:33:48.180 [2024-07-25 01:19:41.111695] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921121 ]
00:33:48.180 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:48.180 Zero copy mechanism will not be used.
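Stripped of the xtrace noise, the bdevperf launch traced above is essentially the following sketch. Paths and flags are exactly as logged; waitforlisten is the polling helper from SPDK's test common scripts, whose location under test/common/ is an assumption here:

    #!/usr/bin/env bash
    # Pull in waitforlisten and related helpers (assumed checkout layout).
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh

    # Start bdevperf on its own RPC socket. -z keeps it idle until a
    # perform_tests RPC arrives; -q 16 and -o 131072 match run_bperf_err randwrite 131072 16.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Block until the UNIX domain socket accepts RPC connections (bounded retries).
    waitforlisten "$bperfpid" /var/tmp/bperf.sock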
00:33:48.180 EAL: No free 2048 kB hugepages reported on node 1
00:33:48.180 [2024-07-25 01:19:41.170682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:48.180 [2024-07-25 01:19:41.259623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:48.438 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:48.438 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:48.438 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:48.438 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:48.696 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:48.696 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:48.696 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:48.696 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:48.696 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:48.696 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:48.954 nvme0n1
00:33:48.954 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:48.954 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:48.954 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:48.954 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:48.954 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:48.954 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
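Collapsed into plain commands, the setup traced above looks like the sketch below. The first two RPCs go to the bdevperf socket; accel_error_inject_error is issued through rpc_cmd, i.e. against the application under test on its default RPC socket (that routing is an assumption here):

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf=/var/tmp/bperf.sock

    # Keep NVMe error statistics and retry failed I/O indefinitely in bdev_nvme,
    # so digest errors show up as counters rather than as I/O failures.
    "$rpc" -s "$bperf" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach over TCP with the data digest (DDGST) enabled: every data PDU
    # now carries a CRC32C that must verify on receive.
    "$rpc" -s "$bperf" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm crc32c corruption in the accel layer (-i 32 as passed in this run),
    # so digest verification fails and writes complete with
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22), as seen below.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Release the queued bdevperf job.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$bperf" perform_tests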
00:33:49.211 [2024-07-25 01:19:42.238982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.211 [2024-07-25 01:19:42.239332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.211 [2024-07-25 01:19:42.239384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.247541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.247919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.247955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.255671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.256035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.256069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.267115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.267483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.267515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.277093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.277485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.277514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.287505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.287875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.287908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.297494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.297892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.297924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.307780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.308146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.308180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.316386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.316622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.316651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.325415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.325755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.325787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.334770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.335107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.335135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.343053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.343373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.343411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.351328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.351668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.351711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.212 [2024-07-25 01:19:42.359430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.212 [2024-07-25 01:19:42.359759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.212 [2024-07-25 01:19:42.359789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.470 [2024-07-25 01:19:42.368066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.470 [2024-07-25 01:19:42.368202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.368232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.378310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.378628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.378658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.387354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.387537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.387564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.396520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.396881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.396909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.406260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.406597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.406626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.414716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.414820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.414847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.422980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.423332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.423369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.430661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.431005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.431034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.439824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.440250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.440279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.448847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.449194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.449223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.458162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.458547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.458577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.467091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.467446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.467476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.476224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.476554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 [2024-07-25 01:19:42.476583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.471 [2024-07-25 01:19:42.485047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:49.471 [2024-07-25 01:19:42.485381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.471 
00:33:49.471 [2024-07-25 01:19:42.485411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same tcp.c:2058:data_crc32_calc_done *ERROR* / nvme_qpair.c WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion record trio repeats for each subsequent write on tqpair=(0x122de90) from 01:19:42.493 through 01:19:43.579; only the timestamps and the lba and sqhd fields vary ...]
00:33:50.509 [2024-07-25 01:19:43.588463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90
00:33:50.509 [2024-07-25 01:19:43.588783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.509 [2024-07-25 01:19:43.588819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.509 [2024-07-25 01:19:43.597299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.509 [2024-07-25 01:19:43.597602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.509 [2024-07-25 01:19:43.597631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.509 [2024-07-25 01:19:43.606362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.509 [2024-07-25 01:19:43.606673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.509 [2024-07-25 01:19:43.606702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.509 [2024-07-25 01:19:43.615252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.509 [2024-07-25 01:19:43.615614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.509 [2024-07-25 01:19:43.615642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.509 [2024-07-25 01:19:43.624071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.509 [2024-07-25 01:19:43.624389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.509 [2024-07-25 01:19:43.624418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.509 [2024-07-25 01:19:43.633214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.509 [2024-07-25 01:19:43.633576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.509 [2024-07-25 01:19:43.633620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.509 [2024-07-25 01:19:43.642044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.509 [2024-07-25 01:19:43.642308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.509 [2024-07-25 01:19:43.642337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.509 [2024-07-25 01:19:43.650580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.509 [2024-07-25 01:19:43.650909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.509 [2024-07-25 01:19:43.650938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.509 [2024-07-25 01:19:43.658601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.658969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.658998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.667778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.668092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.668121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.676844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.677137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.677167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.685866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.686168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.686197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.693848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.694166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.694195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.701364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.701669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.701698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.709905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.710137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.710166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.718746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.719128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.719158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.726649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.726930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.726959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.734780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.735069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.735103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.743458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.743723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.743752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.751089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.751433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.767 [2024-07-25 01:19:43.751462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.767 [2024-07-25 01:19:43.759522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.767 [2024-07-25 01:19:43.759807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.759834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.766829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.767108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 
[2024-07-25 01:19:43.767137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.775325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.775731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.775759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.784039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.784391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.784420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.793041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.793398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.793428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.802036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.802408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.802437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.811300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.811618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.811647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.820454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.820722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.820752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.828763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.829028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.829057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.836271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.836604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.836633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.844220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.844512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.844550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.852721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.852985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.853014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.861147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.861481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.861511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.869930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.870240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.870276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.879014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.879334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.879364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.887047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.887375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.887403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.895759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.896048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.896077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.904660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.904968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.904997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.768 [2024-07-25 01:19:43.913516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:50.768 [2024-07-25 01:19:43.913875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.768 [2024-07-25 01:19:43.913904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.922179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.922557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.922586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.931281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.931628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.931657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.940309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.940648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.940676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.949628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.949968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.949998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.958897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.959225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.959281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.967932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.968263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.968292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.977339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.977652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.977680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.986500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.986828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.986856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:43.994705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:43.995073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:43.995100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:44.003907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:44.004229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:44.004266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:44.012991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 
[2024-07-25 01:19:44.013391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:44.013419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:44.021525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:44.021906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:44.021950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:44.030194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:44.030514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:44.030543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:44.039533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:44.039836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.026 [2024-07-25 01:19:44.039864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.026 [2024-07-25 01:19:44.048111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.026 [2024-07-25 01:19:44.048447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.048476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.057057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.057409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.057437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.066253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.066532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.066561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.074136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.074446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.074475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.082610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.083031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.083059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.091602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.091793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.091821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.100670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.100879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.100907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.110193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.110435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.110464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.118923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.119155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.119184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.127152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.127352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.127380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.135833] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.136040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.136068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.145258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.145382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.145409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.154272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.154587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.154616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.163836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.164095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.164123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.027 [2024-07-25 01:19:44.174025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.027 [2024-07-25 01:19:44.174322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.027 [2024-07-25 01:19:44.174351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.285 [2024-07-25 01:19:44.183092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.285 [2024-07-25 01:19:44.183293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.285 [2024-07-25 01:19:44.183321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.285 [2024-07-25 01:19:44.192554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90 00:33:51.285 [2024-07-25 01:19:44.192793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.285 [2024-07-25 01:19:44.192828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:51.285 [2024-07-25 01:19:44.228410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122de90) with pdu=0x2000190fef90
00:33:51.285 [2024-07-25 01:19:44.228553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:51.285 [2024-07-25 01:19:44.228580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:51.285
00:33:51.285 Latency(us)
00:33:51.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:51.285 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:51.285 nvme0n1 : 2.00 3588.37 448.55 0.00 0.00 4448.48 3276.80 15243.19
00:33:51.285 ===================================================================================================================
00:33:51.285 Total : 3588.37 448.55 0.00 0.00 4448.48 3276.80 15243.19
00:33:51.285 0
00:33:51.285 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:51.285 | .driver_specific
00:33:51.285 | .nvme_error
00:33:51.285 | .status_code
00:33:51.285 | .command_transient_transport_error'
01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 ))
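The get_transient_errcount call traced above reduces to one RPC plus a jq filter; a standalone sketch of the same query follows, with the socket path, script location and bdev name taken from this run (the wrapper variable names are illustrative):

  # count WRITEs that completed with TRANSIENT TRANSPORT ERROR (sketch; assumes
  # the bperf app is still serving RPCs on /var/tmp/bperf.sock)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errs=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 )) && echo "got $errs transient transport errors"   # 232 in this run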
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3921121
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3921121 ']'
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3921121
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3921121
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3921121'
00:33:51.543 killing process with pid 3921121
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3921121
00:33:51.543 Received shutdown signal, test time was about 2.000000 seconds
00:33:51.543
00:33:51.543 Latency(us)
00:33:51.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:51.543 ===================================================================================================================
00:33:51.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3921121
00:33:51.543 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3919762
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3919762 ']'
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3919762
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3919762
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3919762'
00:33:51.801 killing process with pid 3919762
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3919762
00:33:51.801 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3919762
00:33:52.059
00:33:52.059 real 0m15.148s
00:33:52.059 user 0m30.099s
00:33:52.059 sys 0m4.116s
00:33:52.059 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
00:33:52.059 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:52.059 ************************************
00:33:52.059 END TEST nvmf_digest_error
00:33:52.059 ************************************
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
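The killprocess helper traced line by line above follows one fixed pattern; condensed into a sketch (the sudo branch is simplified relative to the real helper):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                  # the '[' -z ... ']' guard from the trace
      kill -0 "$pid" || return 1                 # process must still be running
      local process_name=
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1     # simplification: the real helper treats a sudo parent specially
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # reap it so its ports/sockets are released
  }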
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:33:52.059 rmmod nvme_tcp
00:33:52.059 rmmod nvme_fabrics
00:33:52.059 rmmod nvme_keyring
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3919762 ']'
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3919762
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3919762 ']'
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3919762
00:33:52.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3919762) - No such process
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3919762 is not found'
00:33:52.059 Process with pid 3919762 is not found
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:52.059 01:19:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:53.971 01:19:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:53.971
00:33:53.971 real 0m34.782s
00:33:53.971 user 1m1.453s
00:33:53.971 sys 0m9.702s
00:33:53.971 01:19:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
00:33:53.971 01:19:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:53.971 ************************************
00:33:53.971 END TEST nvmf_digest
00:33:53.971 ************************************
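The nvmftestfini sequence just traced boils down to a handful of commands; a rough equivalent, with the interface and namespace names from this run (the _remove_spdk_ns body is not shown in this log, so the netns deletion line is an assumption):

  sync                                          # nvmfcleanup: flush outstanding I/O first
  modprobe -v -r nvme-tcp                       # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                      # drop the initiator-side test address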
00:33:54.229 01:19:47 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:33:54.229 01:19:47 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:33:54.229 01:19:47 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:33:54.229 01:19:47 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
01:19:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:33:54.229 01:19:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:33:54.229 01:19:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:54.229 ************************************
00:33:54.229 START TEST nvmf_bdevperf
00:33:54.229 ************************************
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:54.229 * Looking for test storage...
00:33:54.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
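The NVME_HOSTNQN/NVME_HOSTID pair above comes straight from nvme-cli; a sketch of the same derivation (the suffix-stripping is an assumption about how common.sh extracts the uuid, but it matches the values in this run):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the uuid part
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")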
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2 through @6: the golangci/1.54.2, protoc/21.7 and go/1.21.1 bin directories are repeatedly prepended to PATH, the result is exported and echoed; the very long, repetitive PATH values are omitted here ...]
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
01:19:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable
00:33:54.229 01:19:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=()
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=()
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=()
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=()
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=()
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=()
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
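The e810/x722/mlx arrays above are device-id whitelists keyed by vendor; common.sh resolves them through a cached PCI bus lookup, but the same vendor/device pair can be read straight from sysfs, e.g. for the first port matched in this run:

  cat /sys/bus/pci/devices/0000:0a:00.0/vendor   # 0x8086 (Intel)
  cat /sys/bus/pci/devices/0000:0a:00.0/device   # 0x159b (an E810-family part)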
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:56.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:56.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:56.194 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:56.195 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:56.195 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:56.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:56.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:33:56.195 00:33:56.195 --- 10.0.0.2 ping statistics --- 00:33:56.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.195 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:56.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:56.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:33:56.195 00:33:56.195 --- 10.0.0.1 ping statistics --- 00:33:56.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.195 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3923469 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3923469 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3923469 ']' 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:56.195 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.195 [2024-07-25 01:19:49.333994] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:56.195 [2024-07-25 01:19:49.334070] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.453 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.453 [2024-07-25 01:19:49.399864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:56.453 [2024-07-25 01:19:49.487280] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
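Stepping back over the setup just traced: nvmf_tcp_init pairs the two ice ports discovered above into a point-to-point test link, moving cvl_0_0 into a private network namespace as the target side (10.0.0.2) and leaving cvl_0_1 in the root namespace as the initiator side (10.0.0.1), then proves reachability both ways with the pings above. A minimal standalone sketch of that topology, using the interface and namespace names from this log (the authoritative logic lives in nvmf/common.sh):

  # sketch only -- rebuild the two-port loopback used by this test
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator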
00:33:56.453 [2024-07-25 01:19:49.487344] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.453 [2024-07-25 01:19:49.487371] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.453 [2024-07-25 01:19:49.487383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.453 [2024-07-25 01:19:49.487394] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:56.453 [2024-07-25 01:19:49.487472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:56.453 [2024-07-25 01:19:49.487553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:56.453 [2024-07-25 01:19:49.487555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.453 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:56.453 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:56.453 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:56.453 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.453 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.712 [2024-07-25 01:19:49.632970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.712 Malloc0 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.712 [2024-07-25 01:19:49.694910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:56.712 { 00:33:56.712 "params": { 00:33:56.712 "name": "Nvme$subsystem", 00:33:56.712 "trtype": "$TEST_TRANSPORT", 00:33:56.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.712 "adrfam": "ipv4", 00:33:56.712 "trsvcid": "$NVMF_PORT", 00:33:56.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.712 "hdgst": ${hdgst:-false}, 00:33:56.712 "ddgst": ${ddgst:-false} 00:33:56.712 }, 00:33:56.712 "method": "bdev_nvme_attach_controller" 00:33:56.712 } 00:33:56.712 EOF 00:33:56.712 )") 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:56.712 01:19:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:56.712 "params": { 00:33:56.712 "name": "Nvme1", 00:33:56.712 "trtype": "tcp", 00:33:56.712 "traddr": "10.0.0.2", 00:33:56.712 "adrfam": "ipv4", 00:33:56.712 "trsvcid": "4420", 00:33:56.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.712 "hdgst": false, 00:33:56.712 "ddgst": false 00:33:56.712 }, 00:33:56.712 "method": "bdev_nvme_attach_controller" 00:33:56.712 }' 00:33:56.712 [2024-07-25 01:19:49.745434] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:33:56.712 [2024-07-25 01:19:49.745513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923566 ] 00:33:56.712 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.712 [2024-07-25 01:19:49.807501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.970 [2024-07-25 01:19:49.902608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.229 Running I/O for 1 seconds... 
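For reference, the tgt_init sequence above is driven through SPDK's JSON-RPC interface; the harness's rpc_cmd wrapper forwards each call to scripts/rpc.py. A hand-run equivalent of the provisioning steps, assuming the stock rpc.py talking to the target's default /var/tmp/spdk.sock:

  # sketch of the provisioning traced above (rpc_cmd == scripts/rpc.py here)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB RAM-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The one-second verify pass launched right above reports its results in the latency table that follows.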
00:33:58.163 
00:33:58.163 Latency(us)
00:33:58.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:58.163 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:58.163 	 Verification LBA range: start 0x0 length 0x4000
00:33:58.163 	 Nvme1n1 : 1.02 8697.77 33.98 0.00 0.00 14654.02 2936.98 15340.28
00:33:58.163 ===================================================================================================================
00:33:58.163 Total : 8697.77 33.98 0.00 0.00 14654.02 2936.98 15340.28
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3923761
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:33:58.421 {
00:33:58.421 "params": {
00:33:58.421 "name": "Nvme$subsystem",
00:33:58.421 "trtype": "$TEST_TRANSPORT",
00:33:58.421 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:58.421 "adrfam": "ipv4",
00:33:58.421 "trsvcid": "$NVMF_PORT",
00:33:58.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:58.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:58.421 "hdgst": ${hdgst:-false},
00:33:58.421 "ddgst": ${ddgst:-false}
00:33:58.421 },
00:33:58.421 "method": "bdev_nvme_attach_controller"
00:33:58.421 }
00:33:58.421 EOF
00:33:58.421 )")
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:33:58.421 01:19:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:33:58.421 "params": {
00:33:58.421 "name": "Nvme1",
00:33:58.421 "trtype": "tcp",
00:33:58.421 "traddr": "10.0.0.2",
00:33:58.421 "adrfam": "ipv4",
00:33:58.421 "trsvcid": "4420",
00:33:58.421 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:58.421 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:58.421 "hdgst": false,
00:33:58.421 "ddgst": false
00:33:58.421 },
00:33:58.421 "method": "bdev_nvme_attach_controller"
00:33:58.421 }'
[2024-07-25 01:19:51.518847] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
[2024-07-25 01:19:51.518939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923761 ]
00:33:58.679 EAL: No free 2048 kB hugepages reported on node 1
00:33:58.679 [2024-07-25 01:19:51.581738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:58.679 [2024-07-25 01:19:51.666633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:33:58.936 Running I/O for 15 seconds... 
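A note on the /dev/fd/62 and /dev/fd/63 arguments seen on the two bdevperf launches: gen_nvmf_target_json prints the bdev_nvme_attach_controller config shown above, and the harness hands it to bdevperf through bash process substitution, so --json reads it without a temp file. This second run asks for 15 seconds precisely so the target can be torn down underneath it; a sketch of the failover step, where nvmfpid is the target PID captured earlier (3923469 here) and the backgrounding is an assumption to make the sketch self-contained:

  # sketch of the failover kick in host/bdevperf.sh
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!           # assumed: harness records the background PID this way
  sleep 3
  kill -9 "$nvmfpid"       # yank the target mid-run

With the target gone, its submission queues are deleted and every in-flight command is completed back to the host with ABORTED - SQ DELETION status, which is exactly the flood of nvme_qpair completions that follows.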
00:34:01.470 01:19:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3923469 00:34:01.470 01:19:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:01.470 [2024-07-25 01:19:54.490570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.470 [2024-07-25 01:19:54.490623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.470 [2024-07-25 01:19:54.490664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.470 [2024-07-25 01:19:54.490683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.470 [2024-07-25 01:19:54.490702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.470 [2024-07-25 01:19:54.490718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.470 [2024-07-25 01:19:54.490737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.470 [2024-07-25 01:19:54.490752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.470 [2024-07-25 01:19:54.490770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.470 [2024-07-25 01:19:54.490786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.470 [2024-07-25 01:19:54.490803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.470 [2024-07-25 01:19:54.490820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.470 [2024-07-25 01:19:54.490838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.490853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.490872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.490887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.490905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.490922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.490940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.490957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.490976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.490992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.471 [2024-07-25 01:19:54.491159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.491972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.491986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.492003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.492017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.492034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.492048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.492065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.492079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.492096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.492110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.471 [2024-07-25 01:19:54.492127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.471 [2024-07-25 01:19:54.492141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 
[2024-07-25 01:19:54.492315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492965] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.492980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.492998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.472 [2024-07-25 01:19:54.493435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.472 [2024-07-25 01:19:54.493451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.493941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 
01:19:54.493974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.493991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.473 [2024-07-25 01:19:54.494386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.473 [2024-07-25 01:19:54.494415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.473 [2024-07-25 01:19:54.494444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.473 [2024-07-25 01:19:54.494473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.473 [2024-07-25 01:19:54.494503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.473 [2024-07-25 01:19:54.494549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.473 [2024-07-25 01:19:54.494577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.473 [2024-07-25 01:19:54.494624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.473 [2024-07-25 01:19:54.494769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.473 [2024-07-25 01:19:54.494788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.474 [2024-07-25 01:19:54.494806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.474 [2024-07-25 01:19:54.494821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.474 [2024-07-25 01:19:54.494838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.474 [2024-07-25 01:19:54.494854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.474 [2024-07-25 01:19:54.494871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25369a0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.494888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:01.474 [2024-07-25 01:19:54.494901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:01.474 [2024-07-25 01:19:54.494915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43624 len:8 PRP1 0x0 PRP2 0x0 00:34:01.474 [2024-07-25 01:19:54.494929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.474 [2024-07-25 01:19:54.494993] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25369a0 was disconnected and freed. reset controller. 
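Note: every completion above carries status "(00/08)", i.e. status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), which is what queued I/O receives when the qpair's submission queue is torn down during the controller reset. A minimal, self-contained C sketch that decodes the 16-bit status field of an NVMe completion; the bit layout is taken from the NVMe base specification, not from this log, and the function names are illustrative only:

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit Status Field of an NVMe completion (dword 3, bits 31:16).
 * Layout per the NVMe base specification:
 *   bit 0      P    - phase tag
 *   bits 8:1   SC   - status code
 *   bits 11:9  SCT  - status code type
 *   bits 13:12 CRD  - command retry delay
 *   bit 14     M    - more
 *   bit 15     DNR  - do not retry
 */
static void decode_nvme_status(uint16_t sts)
{
    unsigned p   = sts & 0x1;
    unsigned sc  = (sts >> 1) & 0xff;
    unsigned sct = (sts >> 9) & 0x7;
    unsigned crd = (sts >> 12) & 0x3;
    unsigned m   = (sts >> 14) & 0x1;
    unsigned dnr = (sts >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u crd:%u\n", sct, sc, p, m, dnr, crd);
    if (sct == 0x0 && sc == 0x08)
        printf("generic status: ABORTED - SQ DELETION\n");
}

int main(void)
{
    /* SCT=0, SC=0x08, all flag bits clear -> status halfword 0x0010,
     * matching the "(00/08) ... p:0 m:0 dnr:0" completions in the log. */
    decode_nvme_status(0x08 << 1);
    return 0;
}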
00:34:01.474 [2024-07-25 01:19:54.498889] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.498967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.499664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.499703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.499725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.499967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.500212] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.500252] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.474 [2024-07-25 01:19:54.500304] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.474 [2024-07-25 01:19:54.503891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.474 [2024-07-25 01:19:54.513070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.513480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.513510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.513543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.513783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.514028] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.514052] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.474 [2024-07-25 01:19:54.514067] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.474 [2024-07-25 01:19:54.517699] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
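Note: each reconnect attempt fails in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 while the target side is down, so every reconnect pass ends in "controller reinitialization failed". A minimal POSIX C sketch of the same connect-and-retry behavior; the address, port, and roughly 14 ms retry spacing mirror the log, but this is an illustration, not SPDK's socket code:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to connect to an NVMe/TCP target the way the log's reconnect loop
 * does; ECONNREFUSED (errno 111 on Linux) means nothing is listening yet. */
int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt + 1);
            close(fd);
            return 0;
        }
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        usleep(14000);   /* the log retries roughly every 14 ms */
    }
    return 1;
}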
00:34:01.474 [2024-07-25 01:19:54.527078] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.527555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.527587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.527606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.527845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.528090] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.528115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.474 [2024-07-25 01:19:54.528131] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.474 [2024-07-25 01:19:54.531726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.474 [2024-07-25 01:19:54.540991] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.541448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.541478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.541495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.541754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.542000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.542026] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.474 [2024-07-25 01:19:54.542042] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.474 [2024-07-25 01:19:54.545589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.474 [2024-07-25 01:19:54.555061] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.555476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.555510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.555529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.555769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.556015] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.556040] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.474 [2024-07-25 01:19:54.556056] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.474 [2024-07-25 01:19:54.559670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.474 [2024-07-25 01:19:54.569008] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.569441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.569474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.569501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.569742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.569987] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.570013] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.474 [2024-07-25 01:19:54.570029] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.474 [2024-07-25 01:19:54.573630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.474 [2024-07-25 01:19:54.582948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.583349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.583382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.583400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.583640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.583885] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.583910] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.474 [2024-07-25 01:19:54.583926] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.474 [2024-07-25 01:19:54.587531] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.474 [2024-07-25 01:19:54.596860] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.597271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.597315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.597332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.597572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.597817] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.597843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.474 [2024-07-25 01:19:54.597859] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.474 [2024-07-25 01:19:54.601461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.474 [2024-07-25 01:19:54.610785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.474 [2024-07-25 01:19:54.611225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.474 [2024-07-25 01:19:54.611266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.474 [2024-07-25 01:19:54.611287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.474 [2024-07-25 01:19:54.611527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.474 [2024-07-25 01:19:54.611773] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.474 [2024-07-25 01:19:54.611804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.475 [2024-07-25 01:19:54.611821] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.475 [2024-07-25 01:19:54.615581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.734 [2024-07-25 01:19:54.624757] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.734 [2024-07-25 01:19:54.625191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.734 [2024-07-25 01:19:54.625226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.734 [2024-07-25 01:19:54.625255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.734 [2024-07-25 01:19:54.625499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.734 [2024-07-25 01:19:54.625746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.734 [2024-07-25 01:19:54.625771] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.734 [2024-07-25 01:19:54.625787] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.734 [2024-07-25 01:19:54.629500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.734 [2024-07-25 01:19:54.638629] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.734 [2024-07-25 01:19:54.639043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.734 [2024-07-25 01:19:54.639078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.734 [2024-07-25 01:19:54.639097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.734 [2024-07-25 01:19:54.639353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.734 [2024-07-25 01:19:54.639598] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.734 [2024-07-25 01:19:54.639624] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.734 [2024-07-25 01:19:54.639641] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.734 [2024-07-25 01:19:54.643231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.734 [2024-07-25 01:19:54.652584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.734 [2024-07-25 01:19:54.653028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.734 [2024-07-25 01:19:54.653056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.734 [2024-07-25 01:19:54.653072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.734 [2024-07-25 01:19:54.653330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.734 [2024-07-25 01:19:54.653575] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.734 [2024-07-25 01:19:54.653601] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.734 [2024-07-25 01:19:54.653617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.734 [2024-07-25 01:19:54.657211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.734 [2024-07-25 01:19:54.666552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.734 [2024-07-25 01:19:54.666975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.734 [2024-07-25 01:19:54.667008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.734 [2024-07-25 01:19:54.667027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.734 [2024-07-25 01:19:54.667281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.734 [2024-07-25 01:19:54.667527] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.734 [2024-07-25 01:19:54.667552] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.734 [2024-07-25 01:19:54.667568] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.734 [2024-07-25 01:19:54.671160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.734 [2024-07-25 01:19:54.680497] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.734 [2024-07-25 01:19:54.680931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.734 [2024-07-25 01:19:54.680963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.734 [2024-07-25 01:19:54.680981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.734 [2024-07-25 01:19:54.681220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.734 [2024-07-25 01:19:54.681476] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.734 [2024-07-25 01:19:54.681502] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.734 [2024-07-25 01:19:54.681518] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.734 [2024-07-25 01:19:54.685110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.734 [2024-07-25 01:19:54.694443] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.734 [2024-07-25 01:19:54.694834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.734 [2024-07-25 01:19:54.694866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.734 [2024-07-25 01:19:54.694884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.734 [2024-07-25 01:19:54.695123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.734 [2024-07-25 01:19:54.695382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.734 [2024-07-25 01:19:54.695409] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.734 [2024-07-25 01:19:54.695425] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.734 [2024-07-25 01:19:54.699015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.734 [2024-07-25 01:19:54.708353] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.734 [2024-07-25 01:19:54.708768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.734 [2024-07-25 01:19:54.708800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.734 [2024-07-25 01:19:54.708818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.734 [2024-07-25 01:19:54.709063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.734 [2024-07-25 01:19:54.709320] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.734 [2024-07-25 01:19:54.709346] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.734 [2024-07-25 01:19:54.709363] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.734 [2024-07-25 01:19:54.712956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.734 [2024-07-25 01:19:54.722324] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.734 [2024-07-25 01:19:54.722723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.734 [2024-07-25 01:19:54.722755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.734 [2024-07-25 01:19:54.722773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.734 [2024-07-25 01:19:54.723012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.734 [2024-07-25 01:19:54.723268] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.734 [2024-07-25 01:19:54.723294] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.734 [2024-07-25 01:19:54.723311] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.734 [2024-07-25 01:19:54.726903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.734 [2024-07-25 01:19:54.736238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.736666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.736697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.736715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.736954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.737197] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.737222] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.737239] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.740845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.735 [2024-07-25 01:19:54.750184] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.750618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.750650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.750668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.750907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.751151] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.751176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.751197] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.754803] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.735 [2024-07-25 01:19:54.764182] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.764605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.764638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.764656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.764897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.765140] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.765166] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.765182] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.768786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.735 [2024-07-25 01:19:54.778101] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.778537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.778570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.778588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.778827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.779071] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.779096] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.779111] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.782738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.735 [2024-07-25 01:19:54.792071] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.792578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.792606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.792622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.792878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.793123] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.793148] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.793164] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.796765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.735 [2024-07-25 01:19:54.806094] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.806550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.806582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.806600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.806839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.807082] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.807107] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.807122] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.810724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.735 [2024-07-25 01:19:54.820013] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.820427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.820460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.820479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.820720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.820965] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.820991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.821006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.824607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.735 [2024-07-25 01:19:54.833938] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.834369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.834402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.834420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.834661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.834906] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.834930] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.834947] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.838560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.735 [2024-07-25 01:19:54.847895] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.848301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.848334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.848353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.848599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.848844] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.848869] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.848885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.852487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.735 [2024-07-25 01:19:54.861811] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.862229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.735 [2024-07-25 01:19:54.862271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.735 [2024-07-25 01:19:54.862290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.735 [2024-07-25 01:19:54.862529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.735 [2024-07-25 01:19:54.862772] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.735 [2024-07-25 01:19:54.862797] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.735 [2024-07-25 01:19:54.862813] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.735 [2024-07-25 01:19:54.866413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.735 [2024-07-25 01:19:54.875734] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.735 [2024-07-25 01:19:54.876155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.736 [2024-07-25 01:19:54.876187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.736 [2024-07-25 01:19:54.876205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.736 [2024-07-25 01:19:54.876456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.736 [2024-07-25 01:19:54.876700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.736 [2024-07-25 01:19:54.876726] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.736 [2024-07-25 01:19:54.876742] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.736 [2024-07-25 01:19:54.880416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.995 [2024-07-25 01:19:54.889801] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.995 [2024-07-25 01:19:54.890233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.995 [2024-07-25 01:19:54.890278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.995 [2024-07-25 01:19:54.890298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.995 [2024-07-25 01:19:54.890539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.995 [2024-07-25 01:19:54.890782] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.995 [2024-07-25 01:19:54.890808] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.995 [2024-07-25 01:19:54.890830] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.995 [2024-07-25 01:19:54.894433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.995 [2024-07-25 01:19:54.903765] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.995 [2024-07-25 01:19:54.904193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.995 [2024-07-25 01:19:54.904226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.995 [2024-07-25 01:19:54.904257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.995 [2024-07-25 01:19:54.904499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.995 [2024-07-25 01:19:54.904742] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.995 [2024-07-25 01:19:54.904768] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.995 [2024-07-25 01:19:54.904784] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.995 [2024-07-25 01:19:54.908381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.995 [2024-07-25 01:19:54.917703] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.995 [2024-07-25 01:19:54.918138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.995 [2024-07-25 01:19:54.918170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.995 [2024-07-25 01:19:54.918188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.995 [2024-07-25 01:19:54.918443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.995 [2024-07-25 01:19:54.918687] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.995 [2024-07-25 01:19:54.918713] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.995 [2024-07-25 01:19:54.918729] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.995 [2024-07-25 01:19:54.922325] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.995 [2024-07-25 01:19:54.931668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.995 [2024-07-25 01:19:54.932072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.995 [2024-07-25 01:19:54.932106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.995 [2024-07-25 01:19:54.932124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.995 [2024-07-25 01:19:54.932380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.995 [2024-07-25 01:19:54.932626] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.995 [2024-07-25 01:19:54.932652] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.995 [2024-07-25 01:19:54.932668] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.995 [2024-07-25 01:19:54.936271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.995 [2024-07-25 01:19:54.945600] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.995 [2024-07-25 01:19:54.946020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.995 [2024-07-25 01:19:54.946057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.995 [2024-07-25 01:19:54.946076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.995 [2024-07-25 01:19:54.946331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.995 [2024-07-25 01:19:54.946574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.995 [2024-07-25 01:19:54.946600] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.995 [2024-07-25 01:19:54.946616] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.995 [2024-07-25 01:19:54.950205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.995 [2024-07-25 01:19:54.959551] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.995 [2024-07-25 01:19:54.959971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.995 [2024-07-25 01:19:54.960003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.995 [2024-07-25 01:19:54.960021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.995 [2024-07-25 01:19:54.960274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.995 [2024-07-25 01:19:54.960519] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.995 [2024-07-25 01:19:54.960544] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.995 [2024-07-25 01:19:54.960560] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.995 [2024-07-25 01:19:54.964150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.995 [2024-07-25 01:19:54.973492] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.995 [2024-07-25 01:19:54.973919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.995 [2024-07-25 01:19:54.973951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:54.973969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:54.974209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:54.974462] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:54.974488] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:54.974503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:54.978090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.996 [2024-07-25 01:19:54.987417] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:54.987833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:54.987865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:54.987883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:54.988123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:54.988384] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:54.988410] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:54.988426] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:54.992013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.996 [2024-07-25 01:19:55.001344] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.001768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.001800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.001819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:55.002057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:55.002311] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:55.002336] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:55.002352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:55.005941] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.996 [2024-07-25 01:19:55.015305] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.015903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.015940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.015974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:55.016262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:55.016511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:55.016536] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:55.016551] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:55.020144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.996 [2024-07-25 01:19:55.029287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.029714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.029746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.029764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:55.030004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:55.030258] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:55.030283] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:55.030299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:55.033899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.996 [2024-07-25 01:19:55.043269] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.043699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.043731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.043749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:55.043989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:55.044233] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:55.044268] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:55.044285] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:55.047881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.996 [2024-07-25 01:19:55.057221] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.057646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.057678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.057697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:55.057936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:55.058180] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:55.058205] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:55.058221] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:55.061818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.996 [2024-07-25 01:19:55.071141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.071543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.071574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.071592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:55.071831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:55.072075] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:55.072102] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:55.072118] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:55.075718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.996 [2024-07-25 01:19:55.085060] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.085468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.085499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.085523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:55.085763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:55.086007] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:55.086031] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:55.086047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:55.089674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.996 [2024-07-25 01:19:55.099019] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.099410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.099442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.099460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.996 [2024-07-25 01:19:55.099700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.996 [2024-07-25 01:19:55.099943] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.996 [2024-07-25 01:19:55.099968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.996 [2024-07-25 01:19:55.099984] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.996 [2024-07-25 01:19:55.103591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.996 [2024-07-25 01:19:55.112941] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.996 [2024-07-25 01:19:55.113336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.996 [2024-07-25 01:19:55.113377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.996 [2024-07-25 01:19:55.113395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.997 [2024-07-25 01:19:55.113634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.997 [2024-07-25 01:19:55.113878] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.997 [2024-07-25 01:19:55.113903] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.997 [2024-07-25 01:19:55.113918] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.997 [2024-07-25 01:19:55.117523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.997 [2024-07-25 01:19:55.126864] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.997 [2024-07-25 01:19:55.127304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.997 [2024-07-25 01:19:55.127336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.997 [2024-07-25 01:19:55.127354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.997 [2024-07-25 01:19:55.127593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.997 [2024-07-25 01:19:55.127839] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.997 [2024-07-25 01:19:55.127870] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.997 [2024-07-25 01:19:55.127887] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.997 [2024-07-25 01:19:55.131502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.997 [2024-07-25 01:19:55.140855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.997 [2024-07-25 01:19:55.141323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.997 [2024-07-25 01:19:55.141359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:01.997 [2024-07-25 01:19:55.141378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:01.997 [2024-07-25 01:19:55.141619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:01.997 [2024-07-25 01:19:55.141863] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.997 [2024-07-25 01:19:55.141889] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.997 [2024-07-25 01:19:55.141905] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.256 [2024-07-25 01:19:55.145663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.256 [2024-07-25 01:19:55.154900] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.256 [2024-07-25 01:19:55.155395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.256 [2024-07-25 01:19:55.155425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.256 [2024-07-25 01:19:55.155441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.256 [2024-07-25 01:19:55.155697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.256 [2024-07-25 01:19:55.155941] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.256 [2024-07-25 01:19:55.155966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.256 [2024-07-25 01:19:55.155982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.256 [2024-07-25 01:19:55.159594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.256 [2024-07-25 01:19:55.168964] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.256 [2024-07-25 01:19:55.169407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.256 [2024-07-25 01:19:55.169441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.256 [2024-07-25 01:19:55.169460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.256 [2024-07-25 01:19:55.169701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.256 [2024-07-25 01:19:55.169947] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.256 [2024-07-25 01:19:55.169972] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.256 [2024-07-25 01:19:55.169988] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.256 [2024-07-25 01:19:55.173590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.256 [2024-07-25 01:19:55.182932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.256 [2024-07-25 01:19:55.183353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.256 [2024-07-25 01:19:55.183386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.256 [2024-07-25 01:19:55.183405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.256 [2024-07-25 01:19:55.183645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.256 [2024-07-25 01:19:55.183891] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.256 [2024-07-25 01:19:55.183916] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.256 [2024-07-25 01:19:55.183932] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.256 [2024-07-25 01:19:55.187537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.256 [2024-07-25 01:19:55.196871] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.197301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.197334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.197353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.197593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.197838] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.197863] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.197879] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.201483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.257 [2024-07-25 01:19:55.210828] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.211259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.211292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.211310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.211551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.211796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.211821] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.211837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.215489] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.257 [2024-07-25 01:19:55.224714] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.225144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.225177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.225195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.225451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.225695] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.225720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.225737] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.229344] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.257 [2024-07-25 01:19:55.238682] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.239107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.239140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.239158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.239412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.239656] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.239681] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.239697] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.243295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.257 [2024-07-25 01:19:55.252640] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.253045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.253076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.253094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.253342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.253586] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.253612] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.253628] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.257218] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.257 [2024-07-25 01:19:55.266556] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.266949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.266980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.266998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.267237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.267495] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.267521] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.267545] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.271134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.257 [2024-07-25 01:19:55.280466] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.280895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.280923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.280939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.281177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.281449] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.281475] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.281491] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.285081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.257 [2024-07-25 01:19:55.294432] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.294849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.294881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.294898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.295138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.295394] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.295419] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.295435] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.299029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.257 [2024-07-25 01:19:55.308396] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.308792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.308824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.308842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.309082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.309340] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.309365] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.309381] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.257 [2024-07-25 01:19:55.312973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.257 [2024-07-25 01:19:55.322324] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.257 [2024-07-25 01:19:55.322732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.257 [2024-07-25 01:19:55.322764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.257 [2024-07-25 01:19:55.322782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.257 [2024-07-25 01:19:55.323022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.257 [2024-07-25 01:19:55.323278] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.257 [2024-07-25 01:19:55.323311] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.257 [2024-07-25 01:19:55.323327] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.258 [2024-07-25 01:19:55.326929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.258 [2024-07-25 01:19:55.336292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.258 [2024-07-25 01:19:55.336799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.258 [2024-07-25 01:19:55.336849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.258 [2024-07-25 01:19:55.336868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.258 [2024-07-25 01:19:55.337108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.258 [2024-07-25 01:19:55.337364] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.258 [2024-07-25 01:19:55.337389] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.258 [2024-07-25 01:19:55.337405] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.258 [2024-07-25 01:19:55.341016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.258 [2024-07-25 01:19:55.350156] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.258 [2024-07-25 01:19:55.350575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.258 [2024-07-25 01:19:55.350607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.258 [2024-07-25 01:19:55.350625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.258 [2024-07-25 01:19:55.350864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.258 [2024-07-25 01:19:55.351108] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.258 [2024-07-25 01:19:55.351133] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.258 [2024-07-25 01:19:55.351149] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.258 [2024-07-25 01:19:55.354761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.258 [2024-07-25 01:19:55.364036] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.258 [2024-07-25 01:19:55.364465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.258 [2024-07-25 01:19:55.364499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.258 [2024-07-25 01:19:55.364517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.258 [2024-07-25 01:19:55.364757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.258 [2024-07-25 01:19:55.365007] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.258 [2024-07-25 01:19:55.365032] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.258 [2024-07-25 01:19:55.365047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.258 [2024-07-25 01:19:55.368656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.258 [2024-07-25 01:19:55.378010] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.258 [2024-07-25 01:19:55.378441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.258 [2024-07-25 01:19:55.378473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.258 [2024-07-25 01:19:55.378491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.258 [2024-07-25 01:19:55.378730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.258 [2024-07-25 01:19:55.378974] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.258 [2024-07-25 01:19:55.378999] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.258 [2024-07-25 01:19:55.379015] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.258 [2024-07-25 01:19:55.382616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.258 [2024-07-25 01:19:55.391960] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.258 [2024-07-25 01:19:55.392388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.258 [2024-07-25 01:19:55.392420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.258 [2024-07-25 01:19:55.392438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.258 [2024-07-25 01:19:55.392677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.258 [2024-07-25 01:19:55.392921] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.258 [2024-07-25 01:19:55.392945] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.258 [2024-07-25 01:19:55.392961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.258 [2024-07-25 01:19:55.396566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.258 [2024-07-25 01:19:55.406152] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.516 [2024-07-25 01:19:55.406605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.516 [2024-07-25 01:19:55.406639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.516 [2024-07-25 01:19:55.406658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.516 [2024-07-25 01:19:55.406898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.516 [2024-07-25 01:19:55.407143] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.516 [2024-07-25 01:19:55.407167] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.516 [2024-07-25 01:19:55.407183] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.516 [2024-07-25 01:19:55.410823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.516 [2024-07-25 01:19:55.420198] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.516 [2024-07-25 01:19:55.420653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.516 [2024-07-25 01:19:55.420705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.516 [2024-07-25 01:19:55.420724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.516 [2024-07-25 01:19:55.420965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.516 [2024-07-25 01:19:55.421210] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.516 [2024-07-25 01:19:55.421235] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.516 [2024-07-25 01:19:55.421263] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.516 [2024-07-25 01:19:55.424862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.516 [2024-07-25 01:19:55.434212] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.516 [2024-07-25 01:19:55.434722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.516 [2024-07-25 01:19:55.434754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.516 [2024-07-25 01:19:55.434772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.516 [2024-07-25 01:19:55.435011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.516 [2024-07-25 01:19:55.435269] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.516 [2024-07-25 01:19:55.435295] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.516 [2024-07-25 01:19:55.435311] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.516 [2024-07-25 01:19:55.438911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.516 [2024-07-25 01:19:55.448270] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.516 [2024-07-25 01:19:55.448682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.516 [2024-07-25 01:19:55.448709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.516 [2024-07-25 01:19:55.448724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.516 [2024-07-25 01:19:55.448950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.516 [2024-07-25 01:19:55.449194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.516 [2024-07-25 01:19:55.449219] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.516 [2024-07-25 01:19:55.449235] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.516 [2024-07-25 01:19:55.452849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.516 [2024-07-25 01:19:55.462194] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.516 [2024-07-25 01:19:55.462693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.516 [2024-07-25 01:19:55.462725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.516 [2024-07-25 01:19:55.462749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.516 [2024-07-25 01:19:55.462989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.516 [2024-07-25 01:19:55.463233] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.516 [2024-07-25 01:19:55.463270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.516 [2024-07-25 01:19:55.463292] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.516 [2024-07-25 01:19:55.466890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.516 [2024-07-25 01:19:55.476226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.516 [2024-07-25 01:19:55.476718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.516 [2024-07-25 01:19:55.476769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.516 [2024-07-25 01:19:55.476788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.516 [2024-07-25 01:19:55.477027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.516 [2024-07-25 01:19:55.477287] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.516 [2024-07-25 01:19:55.477319] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.516 [2024-07-25 01:19:55.477335] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.516 [2024-07-25 01:19:55.480931] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.516 [2024-07-25 01:19:55.490292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.516 [2024-07-25 01:19:55.490690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.516 [2024-07-25 01:19:55.490723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.516 [2024-07-25 01:19:55.490741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.516 [2024-07-25 01:19:55.490979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.516 [2024-07-25 01:19:55.491223] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.516 [2024-07-25 01:19:55.491261] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.516 [2024-07-25 01:19:55.491279] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.494877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.517 [2024-07-25 01:19:55.504228] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.517 [2024-07-25 01:19:55.504658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.517 [2024-07-25 01:19:55.504690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.517 [2024-07-25 01:19:55.504708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.517 [2024-07-25 01:19:55.504947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.517 [2024-07-25 01:19:55.505197] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.517 [2024-07-25 01:19:55.505223] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.517 [2024-07-25 01:19:55.505239] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.508849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.517 [2024-07-25 01:19:55.518288] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.517 [2024-07-25 01:19:55.518710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.517 [2024-07-25 01:19:55.518741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.517 [2024-07-25 01:19:55.518759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.517 [2024-07-25 01:19:55.518998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.517 [2024-07-25 01:19:55.519253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.517 [2024-07-25 01:19:55.519278] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.517 [2024-07-25 01:19:55.519294] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.522888] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.517 [2024-07-25 01:19:55.532262] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.517 [2024-07-25 01:19:55.532692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.517 [2024-07-25 01:19:55.532724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.517 [2024-07-25 01:19:55.532741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.517 [2024-07-25 01:19:55.532981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.517 [2024-07-25 01:19:55.533226] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.517 [2024-07-25 01:19:55.533261] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.517 [2024-07-25 01:19:55.533278] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.536877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.517 [2024-07-25 01:19:55.546220] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.517 [2024-07-25 01:19:55.546629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.517 [2024-07-25 01:19:55.546661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.517 [2024-07-25 01:19:55.546679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.517 [2024-07-25 01:19:55.546919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.517 [2024-07-25 01:19:55.547163] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.517 [2024-07-25 01:19:55.547187] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.517 [2024-07-25 01:19:55.547203] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.550831] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.517 [2024-07-25 01:19:55.560178] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.517 [2024-07-25 01:19:55.560619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.517 [2024-07-25 01:19:55.560651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.517 [2024-07-25 01:19:55.560669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.517 [2024-07-25 01:19:55.560908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.517 [2024-07-25 01:19:55.561152] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.517 [2024-07-25 01:19:55.561178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.517 [2024-07-25 01:19:55.561193] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.564801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.517 [2024-07-25 01:19:55.574143] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.517 [2024-07-25 01:19:55.574585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.517 [2024-07-25 01:19:55.574616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.517 [2024-07-25 01:19:55.574635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.517 [2024-07-25 01:19:55.574874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.517 [2024-07-25 01:19:55.575118] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.517 [2024-07-25 01:19:55.575143] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.517 [2024-07-25 01:19:55.575160] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.578767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.517 [2024-07-25 01:19:55.588095] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.517 [2024-07-25 01:19:55.588499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.517 [2024-07-25 01:19:55.588531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.517 [2024-07-25 01:19:55.588549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.517 [2024-07-25 01:19:55.588789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.517 [2024-07-25 01:19:55.589032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.517 [2024-07-25 01:19:55.589057] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.517 [2024-07-25 01:19:55.589073] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.592676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.517 [2024-07-25 01:19:55.602005] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.517 [2024-07-25 01:19:55.602444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.517 [2024-07-25 01:19:55.602477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:02.517 [2024-07-25 01:19:55.602501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:02.517 [2024-07-25 01:19:55.602743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:02.517 [2024-07-25 01:19:55.602986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.517 [2024-07-25 01:19:55.603011] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.517 [2024-07-25 01:19:55.603027] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.517 [2024-07-25 01:19:55.606635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the identical reset cycle above — nvme_ctrlr_disconnect "resetting controller", posix_sock_create connect() errno = 111 to 10.0.0.2:4420 (tqpair=0x23061e0, nqn.2016-06.io.spdk:cnode1), "controller reinitialization failed", "Resetting controller failed." — repeats 48 more times between 01:19:55.616 and 01:19:56.276 (elapsed 00:34:02.517–00:34:03.299), only the timestamps advancing ...]
00:34:03.299 [2024-07-25 01:19:56.285722] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.299 [2024-07-25 01:19:56.286142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.299 [2024-07-25 01:19:56.286174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.299 [2024-07-25 01:19:56.286192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.299 [2024-07-25 01:19:56.286450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.299 [2024-07-25 01:19:56.286694] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.299 [2024-07-25 01:19:56.286719] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.299 [2024-07-25 01:19:56.286734] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.299 [2024-07-25 01:19:56.290344] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.299 [2024-07-25 01:19:56.299683] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.299 [2024-07-25 01:19:56.300110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.299 [2024-07-25 01:19:56.300142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.299 [2024-07-25 01:19:56.300160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.299 [2024-07-25 01:19:56.300410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.299 [2024-07-25 01:19:56.300654] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.299 [2024-07-25 01:19:56.300679] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.299 [2024-07-25 01:19:56.300701] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.299 [2024-07-25 01:19:56.304308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.299 [2024-07-25 01:19:56.313646] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.299 [2024-07-25 01:19:56.314020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.299 [2024-07-25 01:19:56.314051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.299 [2024-07-25 01:19:56.314070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.299 [2024-07-25 01:19:56.314322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.299 [2024-07-25 01:19:56.314568] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.299 [2024-07-25 01:19:56.314594] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.299 [2024-07-25 01:19:56.314610] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.299 [2024-07-25 01:19:56.318201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.299 [2024-07-25 01:19:56.327541] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.299 [2024-07-25 01:19:56.327973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.299 [2024-07-25 01:19:56.328005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.299 [2024-07-25 01:19:56.328024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.299 [2024-07-25 01:19:56.328275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.299 [2024-07-25 01:19:56.328521] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.299 [2024-07-25 01:19:56.328547] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.299 [2024-07-25 01:19:56.328562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.299 [2024-07-25 01:19:56.332161] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.299 [2024-07-25 01:19:56.341503] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.299 [2024-07-25 01:19:56.341926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.299 [2024-07-25 01:19:56.341957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.299 [2024-07-25 01:19:56.341975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.299 [2024-07-25 01:19:56.342215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.299 [2024-07-25 01:19:56.342469] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.299 [2024-07-25 01:19:56.342495] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.299 [2024-07-25 01:19:56.342511] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.299 [2024-07-25 01:19:56.346107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.299 [2024-07-25 01:19:56.355452] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.299 [2024-07-25 01:19:56.355884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.299 [2024-07-25 01:19:56.355921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.299 [2024-07-25 01:19:56.355941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.299 [2024-07-25 01:19:56.356181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.299 [2024-07-25 01:19:56.356435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.299 [2024-07-25 01:19:56.356462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.299 [2024-07-25 01:19:56.356478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.299 [2024-07-25 01:19:56.360071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.299 [2024-07-25 01:19:56.369403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.299 [2024-07-25 01:19:56.369844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.299 [2024-07-25 01:19:56.369875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.299 [2024-07-25 01:19:56.369893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.300 [2024-07-25 01:19:56.370132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.300 [2024-07-25 01:19:56.370388] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.300 [2024-07-25 01:19:56.370415] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.300 [2024-07-25 01:19:56.370432] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.300 [2024-07-25 01:19:56.374022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.300 [2024-07-25 01:19:56.383456] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.300 [2024-07-25 01:19:56.383880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.300 [2024-07-25 01:19:56.383912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.300 [2024-07-25 01:19:56.383930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.300 [2024-07-25 01:19:56.384170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.300 [2024-07-25 01:19:56.384425] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.300 [2024-07-25 01:19:56.384451] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.300 [2024-07-25 01:19:56.384467] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.300 [2024-07-25 01:19:56.388061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.300 [2024-07-25 01:19:56.397397] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.300 [2024-07-25 01:19:56.397826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.300 [2024-07-25 01:19:56.397857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.300 [2024-07-25 01:19:56.397875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.300 [2024-07-25 01:19:56.398115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.300 [2024-07-25 01:19:56.398377] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.300 [2024-07-25 01:19:56.398403] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.300 [2024-07-25 01:19:56.398419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.300 [2024-07-25 01:19:56.402013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.300 [2024-07-25 01:19:56.411349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.300 [2024-07-25 01:19:56.411768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.300 [2024-07-25 01:19:56.411800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.300 [2024-07-25 01:19:56.411818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.300 [2024-07-25 01:19:56.412057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.300 [2024-07-25 01:19:56.412314] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.300 [2024-07-25 01:19:56.412340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.300 [2024-07-25 01:19:56.412356] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.300 [2024-07-25 01:19:56.416088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.300 [2024-07-25 01:19:56.425240] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.300 [2024-07-25 01:19:56.425670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.300 [2024-07-25 01:19:56.425702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.300 [2024-07-25 01:19:56.425720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.300 [2024-07-25 01:19:56.425960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.300 [2024-07-25 01:19:56.426203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.300 [2024-07-25 01:19:56.426229] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.300 [2024-07-25 01:19:56.426254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.300 [2024-07-25 01:19:56.429853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.300 [2024-07-25 01:19:56.439186] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.300 [2024-07-25 01:19:56.439622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.300 [2024-07-25 01:19:56.439654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.300 [2024-07-25 01:19:56.439672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.300 [2024-07-25 01:19:56.439911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.300 [2024-07-25 01:19:56.440156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.300 [2024-07-25 01:19:56.440180] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.300 [2024-07-25 01:19:56.440196] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.300 [2024-07-25 01:19:56.443805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.560 [2024-07-25 01:19:56.453216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.453677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.453714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.453734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.453976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.454223] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.454257] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.454286] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.457881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.560 [2024-07-25 01:19:56.467221] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.467651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.467684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.467703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.467942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.468187] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.468212] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.468228] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.471829] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.560 [2024-07-25 01:19:56.481186] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.481619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.481651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.481669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.481909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.482166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.482191] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.482206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.485812] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.560 [2024-07-25 01:19:56.495159] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.495559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.495591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.495615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.495858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.496103] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.496128] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.496144] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.499752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.560 [2024-07-25 01:19:56.509100] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.509495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.509527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.509545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.509784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.510029] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.510053] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.510069] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.513674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.560 [2024-07-25 01:19:56.523018] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.523428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.523460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.523478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.523718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.523963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.523987] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.524003] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.527610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.560 [2024-07-25 01:19:56.537183] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.537627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.537659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.537677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.537917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.538163] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.538193] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.538210] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.541823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.560 [2024-07-25 01:19:56.551176] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.551620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.551652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.551670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.551910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.552155] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.552180] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.552195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.555811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.560 [2024-07-25 01:19:56.565171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.565608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.565640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.565658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.565898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.566143] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.566167] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.560 [2024-07-25 01:19:56.566183] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.560 [2024-07-25 01:19:56.569793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.560 [2024-07-25 01:19:56.579149] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.560 [2024-07-25 01:19:56.579591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.560 [2024-07-25 01:19:56.579622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.560 [2024-07-25 01:19:56.579640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.560 [2024-07-25 01:19:56.579879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.560 [2024-07-25 01:19:56.580124] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.560 [2024-07-25 01:19:56.580148] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.580163] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.583767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.561 [2024-07-25 01:19:56.593114] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.593530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.593562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.593580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.593820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.594063] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.594089] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.594105] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.597706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.561 [2024-07-25 01:19:56.607039] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.607466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.607498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.607516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.607755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.607999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.608025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.608040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.611646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.561 [2024-07-25 01:19:56.620930] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.621360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.621401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.621419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.621659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.621903] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.621929] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.621945] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.625553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.561 [2024-07-25 01:19:56.634902] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.635298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.635330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.635348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.635594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.635837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.635863] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.635878] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.639487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.561 [2024-07-25 01:19:56.648823] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.649248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.649281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.649299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.649539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.649783] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.649809] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.649824] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.653426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.561 [2024-07-25 01:19:56.662758] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.663160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.663192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.663210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.663460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.663705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.663730] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.663746] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.667345] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.561 [2024-07-25 01:19:56.676675] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.677078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.677110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.677128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.677380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.677624] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.677649] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.677674] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.681281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.561 [2024-07-25 01:19:56.690617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.691081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.691113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.691131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.691383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.691630] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.691655] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.691672] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.695271] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.561 [2024-07-25 01:19:56.704603] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.561 [2024-07-25 01:19:56.705002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.561 [2024-07-25 01:19:56.705034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.561 [2024-07-25 01:19:56.705052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.561 [2024-07-25 01:19:56.705309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.561 [2024-07-25 01:19:56.705611] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.561 [2024-07-25 01:19:56.705648] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.561 [2024-07-25 01:19:56.705677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.561 [2024-07-25 01:19:56.709415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.821 [2024-07-25 01:19:56.718676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.821 [2024-07-25 01:19:56.719114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.821 [2024-07-25 01:19:56.719149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.821 [2024-07-25 01:19:56.719168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.821 [2024-07-25 01:19:56.719420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.821 [2024-07-25 01:19:56.719668] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.821 [2024-07-25 01:19:56.719693] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.821 [2024-07-25 01:19:56.719709] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.821 [2024-07-25 01:19:56.723310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.821 [2024-07-25 01:19:56.732650] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.821 [2024-07-25 01:19:56.733059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.821 [2024-07-25 01:19:56.733092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.821 [2024-07-25 01:19:56.733111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.821 [2024-07-25 01:19:56.733363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.821 [2024-07-25 01:19:56.733608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.821 [2024-07-25 01:19:56.733634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.821 [2024-07-25 01:19:56.733650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.821 [2024-07-25 01:19:56.737248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.821 [2024-07-25 01:19:56.746578] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.821 [2024-07-25 01:19:56.747002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.821 [2024-07-25 01:19:56.747034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.821 [2024-07-25 01:19:56.747052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.821 [2024-07-25 01:19:56.747303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.821 [2024-07-25 01:19:56.747547] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.821 [2024-07-25 01:19:56.747572] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.821 [2024-07-25 01:19:56.747589] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.821 [2024-07-25 01:19:56.751180] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.821 [2024-07-25 01:19:56.760519] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.821 [2024-07-25 01:19:56.760948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.821 [2024-07-25 01:19:56.760979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.821 [2024-07-25 01:19:56.760998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.821 [2024-07-25 01:19:56.761237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.821 [2024-07-25 01:19:56.761492] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.821 [2024-07-25 01:19:56.761518] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.821 [2024-07-25 01:19:56.761534] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.821 [2024-07-25 01:19:56.765126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.821 [2024-07-25 01:19:56.774493] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.821 [2024-07-25 01:19:56.774895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.821 [2024-07-25 01:19:56.774928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.821 [2024-07-25 01:19:56.774946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.821 [2024-07-25 01:19:56.775192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.821 [2024-07-25 01:19:56.775447] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.821 [2024-07-25 01:19:56.775473] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.821 [2024-07-25 01:19:56.775489] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.821 [2024-07-25 01:19:56.779084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.821 [2024-07-25 01:19:56.788422] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.821 [2024-07-25 01:19:56.788856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.821 [2024-07-25 01:19:56.788889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.788907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.789147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.789403] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.789429] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.789446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.793043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.822 [2024-07-25 01:19:56.802382] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.802808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.802840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.802857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.803096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.803352] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.803378] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.803394] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.806989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.822 [2024-07-25 01:19:56.816484] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.816910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.816941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.816959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.817199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.817453] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.817480] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.817503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.821096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.822 [2024-07-25 01:19:56.830439] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.830865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.830898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.830916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.831155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.831411] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.831438] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.831454] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.835048] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.822 [2024-07-25 01:19:56.844396] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.844838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.844873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.844892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.845135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.845391] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.845418] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.845435] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.849034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.822 [2024-07-25 01:19:56.858380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.858842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.858875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.858894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.859135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.859392] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.859419] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.859436] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.863031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.822 [2024-07-25 01:19:56.872369] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.872804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.872849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.872869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.873110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.873365] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.873392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.873409] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.877001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.822 [2024-07-25 01:19:56.886340] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.886783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.886816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.886835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.887075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.887339] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.887367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.887384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.890978] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.822 [2024-07-25 01:19:56.900319] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.900773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.900808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.900827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.901068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.901325] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.901351] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.901368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.904965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.822 [2024-07-25 01:19:56.914308] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.914752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.822 [2024-07-25 01:19:56.914785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.822 [2024-07-25 01:19:56.914803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.822 [2024-07-25 01:19:56.915043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.822 [2024-07-25 01:19:56.915312] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.822 [2024-07-25 01:19:56.915339] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.822 [2024-07-25 01:19:56.915355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.822 [2024-07-25 01:19:56.918946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.822 [2024-07-25 01:19:56.928284] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.822 [2024-07-25 01:19:56.928730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.823 [2024-07-25 01:19:56.928761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.823 [2024-07-25 01:19:56.928779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.823 [2024-07-25 01:19:56.929018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.823 [2024-07-25 01:19:56.929282] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.823 [2024-07-25 01:19:56.929307] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.823 [2024-07-25 01:19:56.929323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.823 [2024-07-25 01:19:56.932914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.823 [2024-07-25 01:19:56.942248] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.823 [2024-07-25 01:19:56.942682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.823 [2024-07-25 01:19:56.942714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.823 [2024-07-25 01:19:56.942733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.823 [2024-07-25 01:19:56.942973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.823 [2024-07-25 01:19:56.943219] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.823 [2024-07-25 01:19:56.943254] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.823 [2024-07-25 01:19:56.943272] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.823 [2024-07-25 01:19:56.946868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.823 [2024-07-25 01:19:56.956204] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.823 [2024-07-25 01:19:56.956624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.823 [2024-07-25 01:19:56.956657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.823 [2024-07-25 01:19:56.956676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.823 [2024-07-25 01:19:56.956917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:03.823 [2024-07-25 01:19:56.957163] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.823 [2024-07-25 01:19:56.957189] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.823 [2024-07-25 01:19:56.957205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.823 [2024-07-25 01:19:56.960819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.823 [2024-07-25 01:19:56.970327] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.823 [2024-07-25 01:19:56.970752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.823 [2024-07-25 01:19:56.970799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:03.823 [2024-07-25 01:19:56.970832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:03.823 [2024-07-25 01:19:56.971125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.082 [2024-07-25 01:19:56.971386] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.082 [2024-07-25 01:19:56.971412] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.082 [2024-07-25 01:19:56.971429] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.082 [2024-07-25 01:19:56.975024] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.082 [2024-07-25 01:19:56.984253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.082 [2024-07-25 01:19:56.984693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.082 [2024-07-25 01:19:56.984727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.082 [2024-07-25 01:19:56.984747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.082 [2024-07-25 01:19:56.984988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.082 [2024-07-25 01:19:56.985233] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.082 [2024-07-25 01:19:56.985271] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.082 [2024-07-25 01:19:56.985289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.082 [2024-07-25 01:19:56.988889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.082 [2024-07-25 01:19:56.998230] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.082 [2024-07-25 01:19:56.998697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.082 [2024-07-25 01:19:56.998730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.082 [2024-07-25 01:19:56.998749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.082 [2024-07-25 01:19:56.998989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.082 [2024-07-25 01:19:56.999234] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.082 [2024-07-25 01:19:56.999273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.082 [2024-07-25 01:19:56.999290] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.082 [2024-07-25 01:19:57.002892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.082 [2024-07-25 01:19:57.012250] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.082 [2024-07-25 01:19:57.012694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.082 [2024-07-25 01:19:57.012727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.082 [2024-07-25 01:19:57.012752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.082 [2024-07-25 01:19:57.012993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.082 [2024-07-25 01:19:57.013236] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.082 [2024-07-25 01:19:57.013276] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.082 [2024-07-25 01:19:57.013292] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.082 [2024-07-25 01:19:57.017046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.082 [2024-07-25 01:19:57.026196] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.082 [2024-07-25 01:19:57.026615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.082 [2024-07-25 01:19:57.026649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.082 [2024-07-25 01:19:57.026667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.082 [2024-07-25 01:19:57.026908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.082 [2024-07-25 01:19:57.027152] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.082 [2024-07-25 01:19:57.027177] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.082 [2024-07-25 01:19:57.027194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.082 [2024-07-25 01:19:57.030804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.082 [2024-07-25 01:19:57.040140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.082 [2024-07-25 01:19:57.040581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.040614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.040632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.040872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.041116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.041141] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.041157] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.044766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.083 [2024-07-25 01:19:57.054099] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.054511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.054543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.054561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.054800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.055044] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.055075] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.055091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.058701] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.083 [2024-07-25 01:19:57.068036] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.068466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.068498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.068516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.068756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.068999] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.069024] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.069040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.072646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.083 [2024-07-25 01:19:57.081998] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.082408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.082441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.082459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.082699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.082943] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.082968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.082984] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.086593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.083 [2024-07-25 01:19:57.095928] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.096346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.096378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.096396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.096635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.096879] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.096904] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.096920] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.100527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.083 [2024-07-25 01:19:57.109883] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.110313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.110346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.110363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.110603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.110847] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.110873] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.110888] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.114493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.083 [2024-07-25 01:19:57.123827] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.124258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.124290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.124308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.124547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.124791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.124816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.124832] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.128437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.083 [2024-07-25 01:19:57.137777] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.138212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.138254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.138276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.138516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.138761] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.138787] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.138803] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.142404] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.083 [2024-07-25 01:19:57.151759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.152158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.152190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.152208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.152469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.152714] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.152740] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.152756] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.156357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.083 [2024-07-25 01:19:57.165686] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.166083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.166116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.083 [2024-07-25 01:19:57.166134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.083 [2024-07-25 01:19:57.166389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.083 [2024-07-25 01:19:57.166632] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.083 [2024-07-25 01:19:57.166658] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.083 [2024-07-25 01:19:57.166673] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.083 [2024-07-25 01:19:57.170275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.083 [2024-07-25 01:19:57.179602] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.083 [2024-07-25 01:19:57.180022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.083 [2024-07-25 01:19:57.180053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.084 [2024-07-25 01:19:57.180071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.084 [2024-07-25 01:19:57.180325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.084 [2024-07-25 01:19:57.180569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.084 [2024-07-25 01:19:57.180595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.084 [2024-07-25 01:19:57.180611] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.084 [2024-07-25 01:19:57.184205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.084 [2024-07-25 01:19:57.193545] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.084 [2024-07-25 01:19:57.193963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.084 [2024-07-25 01:19:57.193995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.084 [2024-07-25 01:19:57.194013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.084 [2024-07-25 01:19:57.194266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.084 [2024-07-25 01:19:57.194509] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.084 [2024-07-25 01:19:57.194534] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.084 [2024-07-25 01:19:57.194556] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.084 [2024-07-25 01:19:57.198151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.084 [2024-07-25 01:19:57.207492] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.084 [2024-07-25 01:19:57.207912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.084 [2024-07-25 01:19:57.207944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.084 [2024-07-25 01:19:57.207962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.084 [2024-07-25 01:19:57.208203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.084 [2024-07-25 01:19:57.208460] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.084 [2024-07-25 01:19:57.208486] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.084 [2024-07-25 01:19:57.208503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.084 [2024-07-25 01:19:57.212096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.084 [2024-07-25 01:19:57.221409] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.084 [2024-07-25 01:19:57.221819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.084 [2024-07-25 01:19:57.221852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.084 [2024-07-25 01:19:57.221869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.084 [2024-07-25 01:19:57.222109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.084 [2024-07-25 01:19:57.222368] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.084 [2024-07-25 01:19:57.222394] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.084 [2024-07-25 01:19:57.222410] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.084 [2024-07-25 01:19:57.226001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.345 [2024-07-25 01:19:57.235371] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.345 [2024-07-25 01:19:57.235808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.345 [2024-07-25 01:19:57.235843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.345 [2024-07-25 01:19:57.235862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.345 [2024-07-25 01:19:57.236102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.345 [2024-07-25 01:19:57.236383] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.345 [2024-07-25 01:19:57.236412] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.345 [2024-07-25 01:19:57.236429] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.345 [2024-07-25 01:19:57.240106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.345 [2024-07-25 01:19:57.249239] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.345 [2024-07-25 01:19:57.249683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.345 [2024-07-25 01:19:57.249715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.345 [2024-07-25 01:19:57.249733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.345 [2024-07-25 01:19:57.249973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.345 [2024-07-25 01:19:57.250216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.345 [2024-07-25 01:19:57.250254] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.345 [2024-07-25 01:19:57.250275] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.345 [2024-07-25 01:19:57.253869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.345 [2024-07-25 01:19:57.263207] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.345 [2024-07-25 01:19:57.263654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.345 [2024-07-25 01:19:57.263687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.345 [2024-07-25 01:19:57.263705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.345 [2024-07-25 01:19:57.263945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.345 [2024-07-25 01:19:57.264188] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.345 [2024-07-25 01:19:57.264214] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.345 [2024-07-25 01:19:57.264230] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.345 [2024-07-25 01:19:57.267834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.345 [2024-07-25 01:19:57.277179] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.345 [2024-07-25 01:19:57.277729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.345 [2024-07-25 01:19:57.277785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.345 [2024-07-25 01:19:57.277804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.345 [2024-07-25 01:19:57.278043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.345 [2024-07-25 01:19:57.278302] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.345 [2024-07-25 01:19:57.278328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.345 [2024-07-25 01:19:57.278344] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.345 [2024-07-25 01:19:57.281936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.345 [2024-07-25 01:19:57.291066] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.345 [2024-07-25 01:19:57.291500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.345 [2024-07-25 01:19:57.291531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.345 [2024-07-25 01:19:57.291550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.345 [2024-07-25 01:19:57.291789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.345 [2024-07-25 01:19:57.292040] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.345 [2024-07-25 01:19:57.292065] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.345 [2024-07-25 01:19:57.292080] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.345 [2024-07-25 01:19:57.295680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.345 [2024-07-25 01:19:57.305022] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.345 [2024-07-25 01:19:57.305453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.345 [2024-07-25 01:19:57.305485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.345 [2024-07-25 01:19:57.305503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.345 [2024-07-25 01:19:57.305743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.345 [2024-07-25 01:19:57.305987] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.345 [2024-07-25 01:19:57.306012] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.345 [2024-07-25 01:19:57.306028] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.345 [2024-07-25 01:19:57.309628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.345 [2024-07-25 01:19:57.318952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.345 [2024-07-25 01:19:57.319388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.345 [2024-07-25 01:19:57.319420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.345 [2024-07-25 01:19:57.319438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.345 [2024-07-25 01:19:57.319678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.345 [2024-07-25 01:19:57.319923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.345 [2024-07-25 01:19:57.319949] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.345 [2024-07-25 01:19:57.319965] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.345 [2024-07-25 01:19:57.323573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.345 [2024-07-25 01:19:57.332924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.345 [2024-07-25 01:19:57.333355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.345 [2024-07-25 01:19:57.333388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.345 [2024-07-25 01:19:57.333406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.345 [2024-07-25 01:19:57.333647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.345 [2024-07-25 01:19:57.333908] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.345 [2024-07-25 01:19:57.333942] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.345 [2024-07-25 01:19:57.333958] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.345 [2024-07-25 01:19:57.337587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.346 [2024-07-25 01:19:57.346819] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.347336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.347368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.347387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.347627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.347872] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.347897] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.347912] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.351513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.346 [2024-07-25 01:19:57.360847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.361278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.361310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.361329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.361569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.361813] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.361838] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.361854] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.365454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.346 [2024-07-25 01:19:57.374773] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.375173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.375205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.375223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.375472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.375718] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.375742] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.375758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.379357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.346 [2024-07-25 01:19:57.388685] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.389116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.389155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.389175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.389424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.389668] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.389694] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.389710] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.393314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.346 [2024-07-25 01:19:57.402659] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.403094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.403126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.403144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.403396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.403641] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.403666] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.403682] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.407279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.346 [2024-07-25 01:19:57.416804] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.417229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.417272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.417291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.417530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.417773] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.417799] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.417816] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.421429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.346 [2024-07-25 01:19:57.430778] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.431208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.431240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.431268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.431518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.431770] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.431796] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.431812] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.435418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.346 [2024-07-25 01:19:57.444764] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.445191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.445223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.445240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.445492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.445735] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.445760] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.445777] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.449385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.346 [2024-07-25 01:19:57.458728] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.459173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.459205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.346 [2024-07-25 01:19:57.459223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.346 [2024-07-25 01:19:57.459472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.346 [2024-07-25 01:19:57.459717] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.346 [2024-07-25 01:19:57.459742] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.346 [2024-07-25 01:19:57.459758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.346 [2024-07-25 01:19:57.463362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.346 [2024-07-25 01:19:57.472693] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.346 [2024-07-25 01:19:57.473122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.346 [2024-07-25 01:19:57.473154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.347 [2024-07-25 01:19:57.473172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.347 [2024-07-25 01:19:57.473426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.347 [2024-07-25 01:19:57.473671] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.347 [2024-07-25 01:19:57.473696] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.347 [2024-07-25 01:19:57.473712] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.347 [2024-07-25 01:19:57.477313] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
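By this point the same nine-line cycle has repeated for the whole section: disconnect notice, refused connect(), qpair teardown, failed reinitialization, "Resetting controller failed.", then a fresh attempt. The timestamps show each attempt dying within about 4-5 ms and a new one starting roughly every 14 ms, which is bdev_nvme polling its reconnect path while the target side is gone (the shell reports the killed target just below). A standalone sketch of that retry cadence, not SPDK's implementation (127.0.0.1 stands in for the log's 10.0.0.2, and the 100-attempt budget is invented):

```c
/* retry.c - schematic of the connect/fail/retry loop visible above.
 * Build: cc retry.c -o retry */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One connect attempt; returns 0 on success, -errno on failure. */
static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) ? -errno : 0;
    close(fd);
    return rc;
}

int main(void)
{
    for (int attempt = 1; attempt <= 100; attempt++) {   /* invented budget */
        int rc = try_connect("127.0.0.1", 4420);         /* NVMe/TCP port */
        if (rc == 0) {
            printf("listener back after %d attempts\n", attempt);
            return 0;
        }
        fprintf(stderr, "attempt %d: %s\n", attempt, strerror(-rc));
        usleep(14000);  /* ~14 ms between attempts, matching the log cadence */
    }
    fprintf(stderr, "giving up\n");
    return 1;
}
```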
00:34:04.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3923469 Killed "${NVMF_APP[@]}" "$@" 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.347 [2024-07-25 01:19:57.486654] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.347 [2024-07-25 01:19:57.487055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.347 [2024-07-25 01:19:57.487086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.347 [2024-07-25 01:19:57.487105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.347 [2024-07-25 01:19:57.487370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.347 [2024-07-25 01:19:57.487616] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.347 [2024-07-25 01:19:57.487641] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.347 [2024-07-25 01:19:57.487657] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3924423 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3924423 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3924423 ']' 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:04.347 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.347 [2024-07-25 01:19:57.491335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
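nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then blocks in waitforlisten until the new app (pid 3924423) answers on /var/tmp/spdk.sock. A plausible shape for that wait loop, as an assumption-laden sketch (the real helper lives in common/autotest_common.sh):

    # poll the RPC socket until the freshly started app responds; hypothetical
    # re-implementation, not the suite's actual function
    waitforlisten_sketch() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        while ! scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; do
            kill -0 "$pid" 2>/dev/null || return 1   # bail out if the app died
            sleep 0.5
        done
    }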
00:34:04.607 [2024-07-25 01:19:57.500723] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.501216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.501280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.501300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.501540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.501786] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.501811] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.501828] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 [2024-07-25 01:19:57.505429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.607 [2024-07-25 01:19:57.514766] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.515300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.515332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.515351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.515590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.515834] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.515858] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.515874] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 [2024-07-25 01:19:57.519477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.607 [2024-07-25 01:19:57.528805] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.529296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.529329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.529347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.529586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.529843] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.529867] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.529883] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 [2024-07-25 01:19:57.533513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.607 [2024-07-25 01:19:57.537124] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:34:04.607 [2024-07-25 01:19:57.537201] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.607 [2024-07-25 01:19:57.542862] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.543345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.543378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.543397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.543637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.543882] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.543906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.543921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 [2024-07-25 01:19:57.547532] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.607 [2024-07-25 01:19:57.556855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.557307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.557339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.557368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.557608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.557853] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.557877] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.557893] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 [2024-07-25 01:19:57.561502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.607 [2024-07-25 01:19:57.570859] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.571258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.571290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.571309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.571549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.571793] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.571818] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.571833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.607 [2024-07-25 01:19:57.575435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
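The EAL notice above ("No free 2048 kB hugepages reported on node 1") is informational: DPDK simply found its hugepages on a different NUMA node. One way to inspect the per-node counts EAL is reporting on, using the standard sysfs paths:

    # free 2 MiB hugepages per NUMA node, as seen by EAL at startup
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages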
00:34:04.607 [2024-07-25 01:19:57.584780] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.585208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.585239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.585275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.585514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.585759] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.585783] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.585799] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 [2024-07-25 01:19:57.589415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.607 [2024-07-25 01:19:57.598787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.599214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.599263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.599284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.599530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.599785] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.599810] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.599825] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 [2024-07-25 01:19:57.603433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.607 [2024-07-25 01:19:57.610220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:04.607 [2024-07-25 01:19:57.612772] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.613207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.613258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.613279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.607 [2024-07-25 01:19:57.613519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.607 [2024-07-25 01:19:57.613780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.607 [2024-07-25 01:19:57.613804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.607 [2024-07-25 01:19:57.613820] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.607 [2024-07-25 01:19:57.617657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.607 [2024-07-25 01:19:57.626884] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.607 [2024-07-25 01:19:57.627452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.607 [2024-07-25 01:19:57.627493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.607 [2024-07-25 01:19:57.627515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.627771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.628019] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.628044] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.628063] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.608 [2024-07-25 01:19:57.631674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.608 [2024-07-25 01:19:57.640797] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.641255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.641288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.641307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.641546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.641790] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.641829] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.641846] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.608 [2024-07-25 01:19:57.645455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.608 [2024-07-25 01:19:57.654790] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.655213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.655271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.655293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.655534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.655790] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.655816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.655832] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.608 [2024-07-25 01:19:57.659433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.608 [2024-07-25 01:19:57.668808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.669412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.669456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.669477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.669727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.669975] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.670000] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.670019] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.608 [2024-07-25 01:19:57.673634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.608 [2024-07-25 01:19:57.682784] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.683250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.683296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.683316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.683560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.683813] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.683838] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.683855] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.608 [2024-07-25 01:19:57.687456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.608 [2024-07-25 01:19:57.696800] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.697254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.697286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.697306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.697546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.697791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.697816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.697844] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.608 [2024-07-25 01:19:57.701447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.608 [2024-07-25 01:19:57.706250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.608 [2024-07-25 01:19:57.706288] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:04.608 [2024-07-25 01:19:57.706304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.608 [2024-07-25 01:19:57.706317] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.608 [2024-07-25 01:19:57.706328] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:04.608 [2024-07-25 01:19:57.706408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:04.608 [2024-07-25 01:19:57.706475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:04.608 [2024-07-25 01:19:57.706478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.608 [2024-07-25 01:19:57.710797] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.711312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.711347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.711367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.711616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.711874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.711899] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.711916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
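The reactor lines above are consistent with the -m 0xE core mask the target was started with: 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left free, which also matches "Total cores available: 3". A quick check of that arithmetic:

    # -m 0xE == 0b1110: reactors pinned to cores 1, 2 and 3, core 0 untouched
    printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # -> 0xE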
00:34:04.608 [2024-07-25 01:19:57.715522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.608 [2024-07-25 01:19:57.724695] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.725297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.725350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.725372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.725627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.725875] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.725900] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.725929] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.608 [2024-07-25 01:19:57.729553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.608 [2024-07-25 01:19:57.738726] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.739390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.739434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.739456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.739712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.739962] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.739989] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.608 [2024-07-25 01:19:57.740008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.608 [2024-07-25 01:19:57.743616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.608 [2024-07-25 01:19:57.752857] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.608 [2024-07-25 01:19:57.753402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.608 [2024-07-25 01:19:57.753450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.608 [2024-07-25 01:19:57.753473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.608 [2024-07-25 01:19:57.753723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.608 [2024-07-25 01:19:57.753971] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.608 [2024-07-25 01:19:57.753997] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.609 [2024-07-25 01:19:57.754015] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.867 [2024-07-25 01:19:57.757786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.867 [2024-07-25 01:19:57.766819] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.867 [2024-07-25 01:19:57.767328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.867 [2024-07-25 01:19:57.767370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.867 [2024-07-25 01:19:57.767392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.867 [2024-07-25 01:19:57.767639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.867 [2024-07-25 01:19:57.767889] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.867 [2024-07-25 01:19:57.767915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.867 [2024-07-25 01:19:57.767933] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.867 [2024-07-25 01:19:57.771538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.867 [2024-07-25 01:19:57.780892] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.868 [2024-07-25 01:19:57.781526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.868 [2024-07-25 01:19:57.781575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.868 [2024-07-25 01:19:57.781598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.868 [2024-07-25 01:19:57.781851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.868 [2024-07-25 01:19:57.782100] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.868 [2024-07-25 01:19:57.782126] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.868 [2024-07-25 01:19:57.782145] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.868 [2024-07-25 01:19:57.785744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.868 [2024-07-25 01:19:57.794878] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.868 [2024-07-25 01:19:57.795425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.868 [2024-07-25 01:19:57.795463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.868 [2024-07-25 01:19:57.795483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.868 [2024-07-25 01:19:57.795730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.868 [2024-07-25 01:19:57.795977] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.868 [2024-07-25 01:19:57.796003] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.868 [2024-07-25 01:19:57.796020] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.868 [2024-07-25 01:19:57.799618] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.868 [2024-07-25 01:19:57.808948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.868 [2024-07-25 01:19:57.809399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.868 [2024-07-25 01:19:57.809431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.868 [2024-07-25 01:19:57.809450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.868 [2024-07-25 01:19:57.809690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.868 [2024-07-25 01:19:57.809933] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.868 [2024-07-25 01:19:57.809959] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.868 [2024-07-25 01:19:57.809975] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.868 [2024-07-25 01:19:57.813478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.868 [2024-07-25 01:19:57.822500] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.868 [2024-07-25 01:19:57.822888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.868 [2024-07-25 01:19:57.822918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.868 [2024-07-25 01:19:57.822935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.868 [2024-07-25 01:19:57.823176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.868 [2024-07-25 01:19:57.823419] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.868 [2024-07-25 01:19:57.823443] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.868 [2024-07-25 01:19:57.823457] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.868 [2024-07-25 01:19:57.826728] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.868 [2024-07-25 01:19:57.836109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.868 [2024-07-25 01:19:57.836471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.868 [2024-07-25 01:19:57.836500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.868 [2024-07-25 01:19:57.836526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.868 [2024-07-25 01:19:57.836759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.868 [2024-07-25 01:19:57.836982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.868 [2024-07-25 01:19:57.837004] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.868 [2024-07-25 01:19:57.837019] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.868 [2024-07-25 01:19:57.840339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.868 [2024-07-25 01:19:57.849781] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.868 [2024-07-25 01:19:57.850151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.868 [2024-07-25 01:19:57.850181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.868 [2024-07-25 01:19:57.850197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.868 [2024-07-25 01:19:57.850423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.868 [2024-07-25 01:19:57.850658] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.868 [2024-07-25 01:19:57.850680] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.868 [2024-07-25 01:19:57.850693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.868 [2024-07-25 01:19:57.853903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.868 [2024-07-25 01:19:57.857548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.868 [2024-07-25 01:19:57.863456] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.868 [2024-07-25 01:19:57.863846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.868 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.868 [2024-07-25 01:19:57.863875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.868 [2024-07-25 01:19:57.863892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.868 [2024-07-25 01:19:57.864108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.868 [2024-07-25 01:19:57.864335] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.868 [2024-07-25 01:19:57.864359] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.868 [2024-07-25 01:19:57.864373] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.868 [2024-07-25 01:19:57.867597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.868 [2024-07-25 01:19:57.876938] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.868 [2024-07-25 01:19:57.877326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.868 [2024-07-25 01:19:57.877355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.868 [2024-07-25 01:19:57.877371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.868 [2024-07-25 01:19:57.877600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.868 [2024-07-25 01:19:57.877815] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.868 [2024-07-25 01:19:57.877836] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.868 [2024-07-25 01:19:57.877850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.868 [2024-07-25 01:19:57.881021] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
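The bdev_malloc_create 64 512 -b Malloc0 call being issued here creates a RAM-backed bdev of 64 MiB with a 512-byte block size, i.e. 131072 logical blocks, which the subsequent RPCs expose as a namespace of cnode1. Checking that arithmetic:

    # 64 MiB backing store divided into 512 B blocks
    echo $(( 64 * 1024 * 1024 / 512 ))   # 131072 LBAs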
00:34:04.869 [2024-07-25 01:19:57.890515] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.869 [2024-07-25 01:19:57.891048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.869 [2024-07-25 01:19:57.891078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.869 [2024-07-25 01:19:57.891095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.869 [2024-07-25 01:19:57.891338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.869 [2024-07-25 01:19:57.891567] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.869 [2024-07-25 01:19:57.891590] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.869 [2024-07-25 01:19:57.891605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.869 [2024-07-25 01:19:57.894760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.869 [2024-07-25 01:19:57.903938] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.869 [2024-07-25 01:19:57.904542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.869 [2024-07-25 01:19:57.904583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.869 [2024-07-25 01:19:57.904603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.869 [2024-07-25 01:19:57.904855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.869 [2024-07-25 01:19:57.905066] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.869 [2024-07-25 01:19:57.905088] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.869 [2024-07-25 01:19:57.905104] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.869 Malloc0 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.869 [2024-07-25 01:19:57.908452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.869 [2024-07-25 01:19:57.917504] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.869 [2024-07-25 01:19:57.917900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.869 [2024-07-25 01:19:57.917929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23061e0 with addr=10.0.0.2, port=4420 00:34:04.869 [2024-07-25 01:19:57.917946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23061e0 is same with the state(5) to be set 00:34:04.869 [2024-07-25 01:19:57.918189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23061e0 (9): Bad file descriptor 00:34:04.869 [2024-07-25 01:19:57.918447] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.869 [2024-07-25 01:19:57.918471] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.869 [2024-07-25 01:19:57.918486] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.869 [2024-07-25 01:19:57.921732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:04.869 [2024-07-25 01:19:57.926630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.869 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3923761 00:34:04.869 [2024-07-25 01:19:57.931013] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.126 [2024-07-25 01:19:58.096033] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
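De-interleaved from the xtrace output above, the target bring-up that lets the waiting bdevperf job (pid 3923761) finally reconnect is five RPCs; rpc_cmd is the suite's wrapper around scripts/rpc.py, so the equivalent direct invocations would look like this (flags copied verbatim from the log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420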
00:34:15.104 00:34:15.104 Latency(us) 00:34:15.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.104 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:15.104 Verification LBA range: start 0x0 length 0x4000 00:34:15.104 Nvme1n1 : 15.01 6753.62 26.38 8785.32 0.00 8212.10 843.47 20486.07 00:34:15.104 =================================================================================================================== 00:34:15.104 Total : 6753.62 26.38 8785.32 0.00 8212.10 843.47 20486.07 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:15.104 rmmod nvme_tcp 00:34:15.104 rmmod nvme_fabrics 00:34:15.104 rmmod nvme_keyring 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3924423 ']' 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3924423 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3924423 ']' 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3924423 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924423 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924423' 00:34:15.104 killing process with pid 3924423 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3924423 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 3924423 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
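The MiB/s column in the result table above follows directly from the IOPS column and the 4096-byte IO size used by the job:

    # 6753.62 IOPS * 4096 B per IO / 2^20 B per MiB
    awk 'BEGIN { printf "%.2f\n", 6753.62 * 4096 / 1048576 }'   # -> 26.38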
00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:15.104 01:20:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.482 01:20:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:16.482 00:34:16.482 real 0m22.474s 00:34:16.482 user 0m57.928s 00:34:16.482 sys 0m5.307s 00:34:16.482 01:20:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:16.482 01:20:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.482 ************************************ 00:34:16.482 END TEST nvmf_bdevperf 00:34:16.482 ************************************ 00:34:16.741 01:20:09 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:16.741 01:20:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:16.741 01:20:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:16.741 01:20:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.741 ************************************ 00:34:16.741 START TEST nvmf_target_disconnect 00:34:16.741 ************************************ 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:16.741 * Looking for test storage... 
00:34:16.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:16.741 01:20:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
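The array setup above is gather_supported_nvmf_pci_devs building its vendor/device ID tables before walking the PCI bus. A minimal standalone sketch of the same sysfs walk, using only the E810 IDs the log itself matches (0x8086 / 0x159b) and producing output in the same "Found ..." shape:

# hedged sketch of the sysfs walk performed below
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")    # e.g. 0x8086
    device=$(cat "$pci/device")    # e.g. 0x159b
    if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null  # kernel net devices bound to this function, e.g. cvl_0_0
    fi
done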
00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:18.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.639 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:18.640 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.640 01:20:11 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:18.640 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:18.640 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:18.640 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:18.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:34:18.911 00:34:18.911 --- 10.0.0.2 ping statistics --- 00:34:18.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.911 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:18.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:34:18.911 00:34:18.911 --- 10.0.0.1 ping statistics --- 00:34:18.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.911 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.911 ************************************ 00:34:18.911 START TEST nvmf_target_disconnect_tc1 00:34:18.911 ************************************ 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:18.911 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:18.912 
01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.912 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.912 [2024-07-25 01:20:11.956532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.912 [2024-07-25 01:20:11.956603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x817740 with addr=10.0.0.2, port=4420 00:34:18.912 [2024-07-25 01:20:11.956638] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:18.912 [2024-07-25 01:20:11.956666] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:18.912 [2024-07-25 01:20:11.956681] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:18.912 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:18.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:18.912 Initializing NVMe Controllers 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:18.912 00:34:18.912 real 0m0.097s 00:34:18.912 user 0m0.043s 00:34:18.912 sys 0m0.052s 
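tc1 above is an expected-failure check: the reconnect example is run before any target is listening, so spdk_nvme_probe() fails with errno 111 (ECONNREFUSED), and the NOT helper from autotest_common.sh inverts the exit status (es=1, then the (( !es == 0 )) check) so the test passes. A minimal sketch of the same pattern for a by-hand rerun, with plain shell negation standing in for NOT and the exact arguments copied from the log (assumes the cwd is the spdk checkout):

# hedged sketch: require the probe to fail while nothing listens on 10.0.0.2:4420
if ! build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "reconnect failed as expected: no listener on 10.0.0.2:4420 yet"
fi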
00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:18.912 ************************************ 00:34:18.912 END TEST nvmf_target_disconnect_tc1 00:34:18.912 ************************************ 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:18.912 01:20:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.912 ************************************ 00:34:18.912 START TEST nvmf_target_disconnect_tc2 00:34:18.912 ************************************ 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3927568 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3927568 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3927568 ']' 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:18.912 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.172 [2024-07-25 01:20:12.070607] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
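nvmfappstart above launches nvmf_tgt inside the target namespace (the NVMF_TARGET_NS_CMD prefix set up earlier) and then waitforlisten blocks on pid 3927568. A minimal sketch of that step, with the exec line copied from the log and an rpc.py poll on /var/tmp/spdk.sock standing in for the suite's waitforlisten helper (the poll loop is an assumption, not the helper's actual implementation):

# hedged sketch: start the target in its netns and wait for the RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # /var/tmp/spdk.sock appears once the app has initialized
done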
00:34:19.172 [2024-07-25 01:20:12.070680] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.172 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.172 [2024-07-25 01:20:12.138839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.172 [2024-07-25 01:20:12.238734] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.172 [2024-07-25 01:20:12.238791] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.172 [2024-07-25 01:20:12.238808] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.172 [2024-07-25 01:20:12.238822] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.172 [2024-07-25 01:20:12.238833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.172 [2024-07-25 01:20:12.238914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:19.172 [2024-07-25 01:20:12.238969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:19.172 [2024-07-25 01:20:12.239022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:19.172 [2024-07-25 01:20:12.239026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.429 Malloc0 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.429 [2024-07-25 01:20:12.416855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.429 [2024-07-25 01:20:12.445125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3927682 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:19.429 01:20:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:19.429 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.327 01:20:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3927568 00:34:21.327 01:20:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 
00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 [2024-07-25 01:20:14.470178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting 
I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Write completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.327 Read completed with error (sct=0, sc=8) 00:34:21.327 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 [2024-07-25 01:20:14.470577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 
00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 [2024-07-25 01:20:14.470874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Write completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 
Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 Read completed with error (sct=0, sc=8) 00:34:21.328 starting I/O failed 00:34:21.328 [2024-07-25 01:20:14.471222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.328 [2024-07-25 01:20:14.471447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.471488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.471691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.471718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.471852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.471881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.472154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.472205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.472365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.472391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.472537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.472562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.472756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.472787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 
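The error storm above is the intended fault injection, not a harness bug: host/target_disconnect.sh@45 SIGKILLs the nvmf_tgt (pid 3927568) while reconnect has I/O in flight, so each qpair reports CQ transport error -6 (ENXIO, "No such device or address"), and every subsequent connect() retry below gets errno 111 (ECONNREFUSED) because nothing listens on 10.0.0.2:4420 anymore. A hedged recap of the driving sequence, with the sleeps and pid taken from the log:

# hedged recap of the fault-injection step behind the errors above and below
sleep 2              # host/target_disconnect.sh@44: let reconnect ramp up I/O
kill -9 "$nvmfpid"   # @45: SIGKILL nvmf_tgt (pid 3927568 in this run)
sleep 2              # @47: give the initiator time to notice and start retrying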
00:34:21.328 [2024-07-25 01:20:14.472941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.472985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.473221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.473253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.473378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.473407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.473530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.473555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.328 [2024-07-25 01:20:14.473670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.328 [2024-07-25 01:20:14.473695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.328 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.473815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.473851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.474112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.474155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.474350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.474382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.474504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.474531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.474745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.474771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 
00:34:21.329 [2024-07-25 01:20:14.474883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.474908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.475050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.475074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.475264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.475291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.475416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.475441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.475590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.475615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.475735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.475770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.475918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.475947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.476093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.476119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.476302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.476329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 00:34:21.329 [2024-07-25 01:20:14.476476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.329 [2024-07-25 01:20:14.476501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.329 qpair failed and we were unable to recover it. 
00:34:21.329 [2024-07-25 01:20:14.476641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.329 [2024-07-25 01:20:14.476669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.329 qpair failed and we were unable to recover it.
00:34:21.613 [the same three-message sequence repeats for every subsequent connection attempt through 2024-07-25 01:20:14.514445, with only the timestamps and the tqpair value changing (0x7fafd4000b90, 0x7fafc4000b90, or 0x1f9f840); every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, and every qpair ends with "qpair failed and we were unable to recover it."]
00:34:21.613 [2024-07-25 01:20:14.514577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.514606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.514749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.514775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.514917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.514958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.515149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.515175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.515299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.515325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.515496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.515521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.515695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.515720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.515878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.515903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.516044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.516069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.516187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.516212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 
00:34:21.613 [2024-07-25 01:20:14.516357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.516383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.516504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.516529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.516664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.516689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.516860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.516885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.517048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.517076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.517203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.517230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.517378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.517404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.517554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.517579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.517697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.517722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.517871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.517901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 
00:34:21.613 [2024-07-25 01:20:14.518010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.518036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.518180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.613 [2024-07-25 01:20:14.518205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.613 qpair failed and we were unable to recover it. 00:34:21.613 [2024-07-25 01:20:14.518325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.518351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.518490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.518515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.518650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.518675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.518788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.518814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.518956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.518998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.519149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.519177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.519342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.519368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.519475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.519500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 
00:34:21.614 [2024-07-25 01:20:14.519659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.519688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.519848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.519873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.520039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.520064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.520212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.520237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.520361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.520387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.520509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.520534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.520684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.520709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.520855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.520880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.521027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.521052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.521217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.521252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 
00:34:21.614 [2024-07-25 01:20:14.521445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.521470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.521608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.521633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.521772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.521797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.521920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.521945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.522125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.522150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.522291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.522318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.522442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.522467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.522608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.522633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.522811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.522835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 00:34:21.614 [2024-07-25 01:20:14.522970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.614 [2024-07-25 01:20:14.522995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.614 qpair failed and we were unable to recover it. 
00:34:21.614 [2024-07-25 01:20:14.523109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.523134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.523344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.523369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.523482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.523507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.523649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.523690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.523819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.523848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.523982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.524007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.524154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.524195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.524339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.524365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.524532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.524557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.524719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.524751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 
00:34:21.615 [2024-07-25 01:20:14.524922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.524947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.525090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.525115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.525256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.525282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.525453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.525478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.525621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.525646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.525782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.525806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.525952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.525977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.526102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.526127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.526272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.526297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.526411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.526436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 
00:34:21.615 [2024-07-25 01:20:14.526551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.526576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.526721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.526745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.526911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.526938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.527113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.527138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.527288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.527314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.527424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.527449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.527563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.527588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.527732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.527757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.527903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.527929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.528103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.528131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 
00:34:21.615 [2024-07-25 01:20:14.528295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.528321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.528491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.528515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.528642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.528668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.528819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.528845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.528972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.529000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.529164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.529189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.529340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.529366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.615 qpair failed and we were unable to recover it. 00:34:21.615 [2024-07-25 01:20:14.529509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.615 [2024-07-25 01:20:14.529534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.529679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.529705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.529814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.529839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 
00:34:21.616 [2024-07-25 01:20:14.529984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.530011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.530148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.530175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.530319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.530344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.530485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.530510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.530644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.530670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.530780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.530805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.530948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.530974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.531119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.531144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.531287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.531328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.531482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.531513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 
00:34:21.616 [2024-07-25 01:20:14.531678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.531703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.531887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.531914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.532061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.532086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.532225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.532259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.532406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.532431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.532571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.532596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.532705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.532730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.532869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.532893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.533035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.533062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.533218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.533248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 
00:34:21.616 [2024-07-25 01:20:14.533367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.533392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.533585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.533612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.533754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.533780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.533953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.533978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.534102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.534127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.534304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.534329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.534440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.534464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.534602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.534627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.534771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.534796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.534938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.534962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 
00:34:21.616 [2024-07-25 01:20:14.535104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.535129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.535253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.535278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.535394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.535419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.535553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.535581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.616 [2024-07-25 01:20:14.535745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.616 [2024-07-25 01:20:14.535769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.616 qpair failed and we were unable to recover it. 00:34:21.617 [2024-07-25 01:20:14.535916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.617 [2024-07-25 01:20:14.535942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.617 qpair failed and we were unable to recover it. 00:34:21.617 [2024-07-25 01:20:14.536085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.617 [2024-07-25 01:20:14.536111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.617 qpair failed and we were unable to recover it. 00:34:21.617 [2024-07-25 01:20:14.536267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.617 [2024-07-25 01:20:14.536293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.617 qpair failed and we were unable to recover it. 00:34:21.617 [2024-07-25 01:20:14.536443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.617 [2024-07-25 01:20:14.536469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.617 qpair failed and we were unable to recover it. 00:34:21.617 [2024-07-25 01:20:14.536611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.617 [2024-07-25 01:20:14.536640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.617 qpair failed and we were unable to recover it. 
00:34:21.617 [2024-07-25 01:20:14.536804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.536830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.536976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.537001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.537145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.537170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.537312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.537337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.537451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.537476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.537610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.537635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.537752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.537776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.537917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.537957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.538085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.538112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.538277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.538307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.538457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.538485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.538610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.538638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.538771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.538796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.538966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.538990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.539125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.539149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.539306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.539332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.539456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.539480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.539612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.539641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.539785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.539810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.539953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.539977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.540121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.540147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.540268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.540294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.540434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.540459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.540606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.540633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.540839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.540865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.540982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.617 [2024-07-25 01:20:14.541007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.617 qpair failed and we were unable to recover it.
00:34:21.617 [2024-07-25 01:20:14.541153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.541179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.541323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.541349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.541461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.541486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.541634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.541660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.541773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.541797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.541965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.541990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.542133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.542159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.542303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.542328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.542439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.542464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.542575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.542600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.542738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.542763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.542903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.542929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.543075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.543100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.543248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.543274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.543433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.543462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.543629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.543654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.543794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.543819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.543990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.544016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.544188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.544214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.544363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.544388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.544515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.544540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.544655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.544680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.544849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.544873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.545016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.545046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.545164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.545189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.545355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.545380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.545552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.545577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.545719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.545744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.545880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.545905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.546048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.546073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.546191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.546216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.546368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.546393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.546548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.546574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.546735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.546760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.546911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.546934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.547082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.547106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.547213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.547236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.618 [2024-07-25 01:20:14.547420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.618 [2024-07-25 01:20:14.547443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.618 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.547587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.547612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.547783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.547810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.547940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.547964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.548126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.548150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.548319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.548344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.548492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.548516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.548639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.548663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.548835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.548861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.549005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.549030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.549144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.549170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.549284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.549309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.549427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.549452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.549615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.549653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.549863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.549889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.550026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.550051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.550207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.550231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.550393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.550419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.550542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.550569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.550709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.550734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.550853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.550878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.551025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.551049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.551169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.551194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.551337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.551364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.551480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.551504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.551654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.551679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.551791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.551815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.551964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.551989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.552136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.552160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.552304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.552329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.552470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.552495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.552639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.552663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.552776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.552802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.552942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.552968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.553081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.553105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.553224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.553255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.553401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.553425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.553556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.553580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.619 [2024-07-25 01:20:14.553694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-25 01:20:14.553718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.553861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.553886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.554018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.554047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.554199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.554223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.554379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.554418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.554547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.554585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.554736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.554762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.554874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.554899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.555023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.555050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.555223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.555255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.555405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.555431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.555570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.555595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.555737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.555763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.555905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.555930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.556076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.556101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.556222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.556251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.556372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.556398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.556547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.556571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.556705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.556730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.556844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.556870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.556985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.557009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.557127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.557152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.557315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.557353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.557480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.557507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.557651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.557677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.557832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.557857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.557969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.557995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.558118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.558144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.558315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.558341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.558481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.558510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.558657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.558683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.558844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.558870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.559008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.559034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.559176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.559201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.559360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.559387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.559503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.559529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.559651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.559676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.559815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.559840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-25 01:20:14.560011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-25 01:20:14.560036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.560175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.560199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.560324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.560351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.560472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.560497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.560608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.560633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.560757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.560783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.560923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.560948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.561059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.561085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.561203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.561230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.561432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.561458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.561622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.561647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.561788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.561813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.561948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.561973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.562085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.562109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.562255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.562282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.562399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.562425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.562575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.562599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.562715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.562742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.562898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.562931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.563055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.563084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.563212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.563238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.563370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.563399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.563528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.563555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.563707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.563736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.563886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.563927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.564046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.564073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.564220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.564256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.564392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.564419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.564538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.564568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.564681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.564708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.564831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.564863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.564983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.565014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.565164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.565202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.565361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.565389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.621 qpair failed and we were unable to recover it.
00:34:21.621 [2024-07-25 01:20:14.565537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.621 [2024-07-25 01:20:14.565563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.565702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.565727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.565869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.565894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.566010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.566035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.566155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.566183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.566356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.566384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.566499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.566525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.566670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.566696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.566838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.566863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.567007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.567033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.567152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.567178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.567327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.567355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.567539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.567568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.567697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.567724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.567877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.567905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.568040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.568066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.568182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.568210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.568360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.568387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.568556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.568585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.568735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.568778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.568893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.568918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.569086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.569111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.569271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.569297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.569415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.569440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.569601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.569630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.569752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.569777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.569954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.569979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.570145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.570170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.570336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.570361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.570471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.570496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.570603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.570628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.570779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.570803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.570942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.570967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.571108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.571133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.571274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.571300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.571418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.571443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.571587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.622 [2024-07-25 01:20:14.571613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.622 qpair failed and we were unable to recover it.
00:34:21.622 [2024-07-25 01:20:14.571732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.571757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.571875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.571900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.572049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.572074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.572220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.572251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.572424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.572450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.572562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.572586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.572728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.572753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.572868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.572893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.573045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.573070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.573188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.573213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 
00:34:21.623 [2024-07-25 01:20:14.573366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.573400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.573522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.573549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.573671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.573704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.573839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.573866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.573989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.574020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.574163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.574191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.574337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.574364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.574481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.574512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.574637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.574663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.574789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.574816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 
00:34:21.623 [2024-07-25 01:20:14.574960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.574992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.575142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.575168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.575291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.575318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.575434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.575459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.575606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.575632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.575746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.575770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.575914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.575939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.576050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.576076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.576203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.576229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.576360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.576386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 
00:34:21.623 [2024-07-25 01:20:14.576513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.576538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.576650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.576675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.576822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-25 01:20:14.576848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-25 01:20:14.576992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.577017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.577129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.577155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.577275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.577300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.577420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.577445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.577559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.577584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.577699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.577724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.577936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.577961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 
00:34:21.624 [2024-07-25 01:20:14.578108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.578134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.578291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.578321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.578478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.578503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.578673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.578698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.578854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.578879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.579003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.579028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.579165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.579191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.579314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.579339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.579453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.579478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.579646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.579672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 
00:34:21.624 [2024-07-25 01:20:14.579818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.579843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.579990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.580015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.580132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.580157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.580279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.580304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.580418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.580443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.580588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.580613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.580723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.580748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.580901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.580926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.581069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.581094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.581199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.581224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 
00:34:21.624 [2024-07-25 01:20:14.581347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.581373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.581495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.581519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.581639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.581664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.581801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.581826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.581935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.581960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.582124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.582149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.582267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.582293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.582411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.582436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.582572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.582597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.582715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.582742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 
00:34:21.624 [2024-07-25 01:20:14.582886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-25 01:20:14.582912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-25 01:20:14.583111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.583139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.583309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.583335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.583452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.583478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.583599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.583625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.583767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.583792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.583937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.583963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.584089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.584115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.584262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.584287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.584426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.584451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 
00:34:21.625 [2024-07-25 01:20:14.584570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.584595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.584764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.584789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.584913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.584943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.585078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.585121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.585263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.585292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.585426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.585453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.585607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.585633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.585761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.585793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.585943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.585970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.586114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.586139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 
00:34:21.625 [2024-07-25 01:20:14.586261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.586286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.586414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.586439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.586556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.586582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.586694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.586720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.586885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.586910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.587014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.587038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.587155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.587180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.587337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.587366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.587511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.587538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.587689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.587714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 
00:34:21.625 [2024-07-25 01:20:14.587842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.587868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.587987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.588012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.588129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.588154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.588300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.588326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.588443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.588467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.588608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.588633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.588828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.588853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.588994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.589019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.589166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-25 01:20:14.589191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-25 01:20:14.589352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.589381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 
00:34:21.626 [2024-07-25 01:20:14.589502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.589530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.589681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.589707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.589860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.589885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.590007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.590032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.590150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.590175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.590288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.590313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.590424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.590450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.590572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.590597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.590734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.590777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.590899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.590924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 
00:34:21.626 [2024-07-25 01:20:14.591088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.591113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.591253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.591279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.591417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.591442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.591577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.591602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.591747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.591788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.591955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.591983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.592126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.592151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.592280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.592306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.592431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.592456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-25 01:20:14.592652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-25 01:20:14.592693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 
00:34:21.626 [2024-07-25 01:20:14.592936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.626 [2024-07-25 01:20:14.592987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.626 qpair failed and we were unable to recover it.
00:34:21.627 [2024-07-25 01:20:14.599353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.627 [2024-07-25 01:20:14.599393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.627 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats for every reconnect attempt from 01:20:14.592936 through 01:20:14.630779, alternating between tqpair=0x1f9f840 and tqpair=0x7fafcc000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:34:21.632 [2024-07-25 01:20:14.630751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.632 [2024-07-25 01:20:14.630779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.632 qpair failed and we were unable to recover it.
00:34:21.632 [2024-07-25 01:20:14.630956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.630998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.631184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.631209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.631325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.631352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.631494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.631519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.631717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.631744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.631902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.631930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.632094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.632119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.632239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.632269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.632430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.632456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.632568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.632593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 
00:34:21.632 [2024-07-25 01:20:14.632739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.632771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.632934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.632969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.633135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.633176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.633359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.633396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.633556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.633590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.633756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.633791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.633982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.634035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.634235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.634300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.634451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.634489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.634674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.634705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 
00:34:21.632 [2024-07-25 01:20:14.634949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.634992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.635159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.635184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.635342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-25 01:20:14.635369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-25 01:20:14.635511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.635538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.635739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.635783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.636010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.636054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.636249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.636277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.636418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.636445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.636617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.636643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.636810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.636837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 
00:34:21.633 [2024-07-25 01:20:14.636955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.636983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.637101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.637128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.637284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.637312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.637433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.637458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.637590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.637616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.637821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.637847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.638000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.638026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.638163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.638189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.638339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.638365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.638495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.638522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 
00:34:21.633 [2024-07-25 01:20:14.638677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.638711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.638858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.638884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.639049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.639074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.639212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.639238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.639373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.639398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.639519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.639557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.639700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.639727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.639873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.639900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.640073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.640099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.640250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.640276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 
00:34:21.633 [2024-07-25 01:20:14.640421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.640447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.640587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.640613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.640731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.640761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.640986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.641011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.641232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.641267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.641411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.641437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.641595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.641622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.641767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.641793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.641959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.641986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-25 01:20:14.642156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-25 01:20:14.642182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 
00:34:21.633 [2024-07-25 01:20:14.642332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.642359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.642468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.642495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.642636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.642662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.642830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.642856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.642995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.643021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.643126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.643152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.643307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.643334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.643453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.643479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.643702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.643728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.643873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.643900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 
00:34:21.634 [2024-07-25 01:20:14.644082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.644108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.644254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.644281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.644430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.644456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.644633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.644660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.644787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.644812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.644924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.644950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.645094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.645120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.645280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.645316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.645469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.645498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.645675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.645702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 
00:34:21.634 [2024-07-25 01:20:14.645843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.645869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.646012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.646038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.646212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.646257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.646405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.646430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.646568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.646603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.646719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.646744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.646852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.646878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.647011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.647036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.647181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.647206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.647343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.647370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 
00:34:21.634 [2024-07-25 01:20:14.647533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.647558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.647725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.647750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.647928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.647954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.648096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.648121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.648233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.648269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.648439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.648464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.648575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.648602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.648720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-25 01:20:14.648746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-25 01:20:14.648863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.648889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.649059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.649085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 
00:34:21.635 [2024-07-25 01:20:14.649226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.649260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.649405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.649431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.649564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.649589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.649763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.649788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.649931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.649958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.650109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.650134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.650289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.650315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.650453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.650479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.650613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.650638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.650774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.650799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 
00:34:21.635 [2024-07-25 01:20:14.650923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.650948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.651114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.651141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.651280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.651306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.651417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.651443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.651575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.651604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.651786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.651814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.651974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.651999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.652136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.652161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.652327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.652352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.652494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.652535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 
00:34:21.635 [2024-07-25 01:20:14.652772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.652800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.652932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.652957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.653102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.653128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.653269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.653295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.653438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.653463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.653631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.653660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.653855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.653880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.654015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.654041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.654207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.654236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-25 01:20:14.654389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-25 01:20:14.654414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 
00:34:21.635 [2024-07-25 01:20:14.654611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.635 [2024-07-25 01:20:14.654639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.635 qpair failed and we were unable to recover it.
00:34:21.635 .. 00:34:21.641 [01:20:14.654790 .. 01:20:14.691685] (the same three-line error repeats continuously throughout this interval: posix.c:1037:posix_sock_create connect() failed with errno = 111, followed by the nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x1f9f840 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.")
00:34:21.641 [2024-07-25 01:20:14.691827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.691852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.692002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.692028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.692140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.692165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.692311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.692337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.692452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.692478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.692663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.692692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.692850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.692876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.693023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.693049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.693169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.693194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.693349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.693376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 
00:34:21.641 [2024-07-25 01:20:14.693512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.693542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.693697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.693725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-25 01:20:14.693844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-25 01:20:14.693872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.694032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.694058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.694173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.694200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.694354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.694380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.694523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.694548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.694659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.694684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.694862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.694887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.695007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.695032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 
00:34:21.642 [2024-07-25 01:20:14.695191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.695217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.695352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.695378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.695491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.695516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.695688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.695717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.695907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.695935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.696071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.696099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.696251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.696277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.696413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.696438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.696552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.696576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.696778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.696806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 
00:34:21.642 [2024-07-25 01:20:14.696964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.696994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.697158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.697187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.697363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.697389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.697552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.697577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.697701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.697727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.697916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.697958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.698114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.698143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.698308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.698338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.698458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.698483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.698623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.698648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 
00:34:21.642 [2024-07-25 01:20:14.698792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.698818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.698942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.698970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.699140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.699168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.699360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.699386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.699500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.699526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.642 qpair failed and we were unable to recover it. 00:34:21.642 [2024-07-25 01:20:14.699645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.642 [2024-07-25 01:20:14.699671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.699841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.699869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.700043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.700071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.700250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.700295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.700437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.700462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 
00:34:21.643 [2024-07-25 01:20:14.700617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.700645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.700813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.700842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.700977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.701002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.701140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.701169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.701319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.701345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.701499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.701524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.701662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.701687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.701827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.701852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.701964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.701989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.702118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.702143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 
00:34:21.643 [2024-07-25 01:20:14.702292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.702318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.702429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.702455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.702595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.702620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.702758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.702783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.702918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.702944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.703711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.703755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.703943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.703969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.704681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.704713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.704916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.704946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.705140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.705167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 
00:34:21.643 [2024-07-25 01:20:14.705288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.705313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.705432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.705457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.705577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.705602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.705769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.705794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.705979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.706008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.706222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.706266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.706407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.706433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.706549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.706575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.706721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.706749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.706903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.706932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 
00:34:21.643 [2024-07-25 01:20:14.707062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.707091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.707267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.707308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.707435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.643 [2024-07-25 01:20:14.707462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.643 qpair failed and we were unable to recover it. 00:34:21.643 [2024-07-25 01:20:14.707569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.707604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.707754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.707781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.707923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.707962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.708134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.708172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.708339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.708366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.708484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.708511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.708679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.708705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 
00:34:21.644 [2024-07-25 01:20:14.708817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.708843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.708987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.709013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.709133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.709158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.709277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.709303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.709414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.709441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.709612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.709639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.709796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.709824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.709987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.710012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.710125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.710151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.710298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.710324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 
00:34:21.644 [2024-07-25 01:20:14.710465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.710491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.710644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.710670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.710841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.710869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.711060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.711086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.711256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.711298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.711450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.711482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.711632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.711659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.711846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.711890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.712039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.712064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.712201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.712227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 
00:34:21.644 [2024-07-25 01:20:14.712381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.712408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.712637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.712680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.712853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.712901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.713121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.713147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.713290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.713317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.713445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.713472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.713594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.713629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.713800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.713844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.713965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.713997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.644 [2024-07-25 01:20:14.714168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.714194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 
00:34:21.644 [2024-07-25 01:20:14.714304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.644 [2024-07-25 01:20:14.714331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.644 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.714503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.714530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.714672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.714702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.714817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.714844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.715020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.715057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.715224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.715287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.715447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.715475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.715680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.715708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.715907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.715935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.716130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.716155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 
00:34:21.645 [2024-07-25 01:20:14.716344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.716370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.716512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.716538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.716700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.716726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.716836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.716861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.717007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.717034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.717158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.717184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.717336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.717362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.717503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.717528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.717691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.717716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 00:34:21.645 [2024-07-25 01:20:14.717860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.645 [2024-07-25 01:20:14.717886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.645 qpair failed and we were unable to recover it. 
00:34:21.645 [2024-07-25 01:20:14.718033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.718059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.718169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.718194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.718339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.718365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.718485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.718510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.718675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.718700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.718821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.718851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.718992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.719017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.719159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.719185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.719364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.719391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.719506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.719532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.719672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.719698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.719849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.645 [2024-07-25 01:20:14.719875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.645 qpair failed and we were unable to recover it.
00:34:21.645 [2024-07-25 01:20:14.720009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.720046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.720185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.720211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.720347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.720372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.720483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.720508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.720620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.720646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.720782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.720807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.720950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.720976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.721126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.721151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.721296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.721323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.721434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.721460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.721576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.721602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.721744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.721770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.721894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.721919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.722039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.722064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.722179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.722206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.722347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.722373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.722487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.722512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.722684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.722723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.722866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.722902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.723065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.723093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.723212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.723257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.723419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.723446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.723594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.723621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.723744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.723769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.723883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.723909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.724029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.724055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.724181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.724207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.724347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.724375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
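Note: the tqpair value printed in these messages is the address of the qpair object allocated for that connection attempt. The changes visible above (0x1f9f840, then 0x7fafcc000b90 and 0x7fafd4000b90) therefore indicate fresh qpair objects, plausibly allocated from different threads' arenas given the 0x7faf... ranges, not a different kind of failure; the underlying connect() error is the same ECONNREFUSED throughout.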
00:34:21.646 [2024-07-25 01:20:14.724523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.724548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.724671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.724698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.724825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.724851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.724970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.724995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.725127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.725154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.725294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.725320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.725469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.725496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.725637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.725663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.725803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.725828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.725971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.646 [2024-07-25 01:20:14.725998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.646 qpair failed and we were unable to recover it.
00:34:21.646 [2024-07-25 01:20:14.726110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.726136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.726271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.726296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.726410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.726436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.726576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.726601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.726729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.726756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.726870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.726896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.727035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.727059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.727987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.728017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.728142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.728169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.728294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.728324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.728442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.728469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.728592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.728617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.728753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.728780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.728897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.728923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.729042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.729069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.729188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.729214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.729929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.729959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.730080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.730106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.730221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.730254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.730371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.730396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.730521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.730547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.730666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.730691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.730807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.730833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.730974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.730998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.731143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.731167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.731290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.731318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.731437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.731462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.731572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.731598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.731739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.731763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.731875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.731901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.732048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.732073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.732239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.732272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.732395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.732420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.732550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.732575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.732722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.732747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.732896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.732921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.733075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.647 [2024-07-25 01:20:14.733099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.647 qpair failed and we were unable to recover it.
00:34:21.647 [2024-07-25 01:20:14.733254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.733280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.733394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.733421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.733588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.733612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.733739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.733763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.733921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.733947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.734058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.734083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.734229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.734264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.734379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.734414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.734569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.734603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.734735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.734760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.734876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.734900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.735022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.735048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.735196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.735225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.735394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.735419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.735533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.735559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.735670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.735694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.735865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.735890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.736018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.736043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.736162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.736198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.736380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.736407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.736552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.736578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.736720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.736745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.736857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.736881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.736999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.737024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.737139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.737164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.737312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.737351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.737507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.737535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.737656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.737685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.648 [2024-07-25 01:20:14.737822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.648 [2024-07-25 01:20:14.737859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.648 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.737995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.738022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.738168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.738193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.738318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.738344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.738460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.738485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.738602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.738627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.738768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.738793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.738939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.738964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.739108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.739134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.739275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.739302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.739413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.739438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.739555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.739581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.739692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.739718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.739864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.739890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.740026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.740052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.740174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.740199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.740348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.740374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.740491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.740516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.740637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.740662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.740778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.740805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.740921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.740946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.741109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.741134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.741267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.741293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.741430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.741455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.741605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.741631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.741787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.741819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.741934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.741959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.742072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.742097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.742207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.742233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.742363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.742388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.743098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.743128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.743275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.743302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.921 qpair failed and we were unable to recover it.
00:34:21.921 [2024-07-25 01:20:14.743421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.921 [2024-07-25 01:20:14.743447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.743570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.743595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.743758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.743783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.743909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.743934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.744075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.744100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.744248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.744274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.744391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.744421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.744539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.744564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.744707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.744733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.744872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.744898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.745021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.745046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.745160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.745186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.745339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.745364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.745533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.745557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.745695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.745720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.745871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.745903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.746035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.746061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.746168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.746193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.746347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.746372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.746488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.746514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.746670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.746696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.746849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.746874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.747011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.747036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.747174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.747199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.747389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.747414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.747561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.747587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.747719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.747744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.747860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.747886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.748002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.748030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.748148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.748174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.748333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.748359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.748482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.748507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.748681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.748706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.748837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.748866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.748974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.749000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.749159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.749185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.749339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.749366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.922 qpair failed and we were unable to recover it.
00:34:21.922 [2024-07-25 01:20:14.749510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.922 [2024-07-25 01:20:14.749534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.749672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.749697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.749814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.749850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.749972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.749997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.750128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.750153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.750275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.750301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.750421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.750446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.750554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.750579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.750701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.750728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.750851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.750877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.751027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.751052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.751195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.751219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.751337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.751362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.751544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.751570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.751701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.751726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.751869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.923 [2024-07-25 01:20:14.751894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:21.923 qpair failed and we were unable to recover it.
00:34:21.923 [2024-07-25 01:20:14.752010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.752037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.752173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.752198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.752333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.752358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.752472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.752498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.752623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.752648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.752785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.752810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.752938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.752963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.753079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.753104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.753235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.753266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.753420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.753445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 
00:34:21.923 [2024-07-25 01:20:14.753574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.753599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.753705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.753731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.753839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.753864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.753978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.754003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.754127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.754152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.754272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.754298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.754417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.754442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.754563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.754589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.754711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.754736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.754859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.754884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 
00:34:21.923 [2024-07-25 01:20:14.755035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-25 01:20:14.755060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-25 01:20:14.755231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.755289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.755436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.755461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.755566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.755595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.755720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.755745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.755867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.755892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.756014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.756040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.756156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.756181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.756307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.756346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.756466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.756492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 
00:34:21.924 [2024-07-25 01:20:14.756656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.756681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.756807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.756834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.756978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.757003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.757151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.757175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.757301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.757328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.757474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.757500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.757620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.757646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.757801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.757826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.757974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.757999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.758139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.758165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 
00:34:21.924 [2024-07-25 01:20:14.758278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.758305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.758452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.758476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.758587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.758612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.758781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.758806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.758946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.758970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.759086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.759111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.759214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.759239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.759360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.759385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.759507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.759533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.759657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.759684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 
00:34:21.924 [2024-07-25 01:20:14.759821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.759847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.759954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.759978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.760123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.760148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.760290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.760316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.760460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.760485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.760629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.760655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.760774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-25 01:20:14.760798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-25 01:20:14.760908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.760932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.761062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.761087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.761224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.761257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 
00:34:21.925 [2024-07-25 01:20:14.761431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.761455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.761598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.761627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.761779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.761803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.761946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.761970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.762088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.762114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.762228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.762259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.762374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.762399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.762517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.762542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.762687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.762712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.762878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.762902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 
00:34:21.925 [2024-07-25 01:20:14.763046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.763071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.763186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.763211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.763372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.763399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.763517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.763543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.763699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.763724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.763878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.763903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.764050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.764078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.764222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.764256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.764398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.764423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.764565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.764590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 
00:34:21.925 [2024-07-25 01:20:14.764712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.764737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.764887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.764912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.765028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.765055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.765217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.765248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.765394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.765420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.765534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.765561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-25 01:20:14.765679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-25 01:20:14.765704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.765818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.765843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.765982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.766011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.766132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.766158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
00:34:21.926 [2024-07-25 01:20:14.766335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.766361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.766501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.766527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.766669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.766695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.766797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.766821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.766963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.766987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.767147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.767175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.767323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.767348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.767503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.767527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.767636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.767662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.767802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.767827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
00:34:21.926 [2024-07-25 01:20:14.767978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.768004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.768131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.768160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.768344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.768369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.768512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.768537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.768682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.768707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.768850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.768875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.769021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.769046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.769213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.769246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.769403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.769427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.769541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.769565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
00:34:21.926 [2024-07-25 01:20:14.769702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.769727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.769868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.769893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.770033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.770058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.770231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.770284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.770425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.770450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.770584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.770609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.770724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.770751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.770872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.770897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.771040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.771066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.771240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.771277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
00:34:21.926 [2024-07-25 01:20:14.771396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.771422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.771557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-25 01:20:14.771583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-25 01:20:14.771755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.771783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-25 01:20:14.771905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.771933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-25 01:20:14.772063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.772091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-25 01:20:14.772289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.772317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-25 01:20:14.772453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.772480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-25 01:20:14.772635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.772663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-25 01:20:14.772852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.772881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-25 01:20:14.773002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.773026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 
00:34:21.927 [2024-07-25 01:20:14.773140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-25 01:20:14.773166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it.
[... the same three-message group (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it.) repeats for tqpair=0x7fafd4000b90 through 2024-07-25 01:20:14.783280 ...]
00:34:21.929 [2024-07-25 01:20:14.783431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.929 [2024-07-25 01:20:14.783466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:21.929 qpair failed and we were unable to recover it.
[... the same group repeats for tqpair=0x7fafcc000b90 through 2024-07-25 01:20:14.786404, then again for tqpair=0x7fafd4000b90 from 2024-07-25 01:20:14.786585 through 2024-07-25 01:20:14.809853 ...]
00:34:21.933 [2024-07-25 01:20:14.810008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.810035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.810189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.810217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.810364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.810393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.810568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.810593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.810760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.810801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.810982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.811010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.811164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.811192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.811390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.811416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.811570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.811597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.811754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.811783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 
00:34:21.933 [2024-07-25 01:20:14.811917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.811945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.812130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.812154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.812347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.812376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.812494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.812523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.812645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.812672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.812850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.812875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.813036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.813064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.813222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.813257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.813442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.813471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-25 01:20:14.813608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.813634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 
00:34:21.933 [2024-07-25 01:20:14.813782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-25 01:20:14.813823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.814006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.814033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.814148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.814176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.814358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.814384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.814566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.814593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.814753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.814782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.814962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.814990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.815156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.815181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.815305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.815351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.815498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.815525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 
00:34:21.934 [2024-07-25 01:20:14.815664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.815689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.815831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.815855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.815996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.816021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.816192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.816218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.816341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.816366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.816507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.816532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.816670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.816695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.816818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.816844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.816986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.817015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.817181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.817206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 
00:34:21.934 [2024-07-25 01:20:14.817356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.817381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.817526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.817552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.817692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.817720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.817885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.817909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.818063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.818091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.818275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.818304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.818474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.818499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.818643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.818667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.818862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.818890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.819057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.819083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 
00:34:21.934 [2024-07-25 01:20:14.819239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.819281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.819424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.819450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.819594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.819638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.819797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.819825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.819957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.819984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-25 01:20:14.820145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-25 01:20:14.820171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.820332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.820361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.820515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.820542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.820698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.820726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.820920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.820945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 
00:34:21.935 [2024-07-25 01:20:14.821104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.821132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.821278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.821307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.821426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.821453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.821642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.821667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.821837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.821865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.822057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.822084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.822214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.822249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.822414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.822439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.822630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.822663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.822830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.822855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 
00:34:21.935 [2024-07-25 01:20:14.822967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.822991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.823111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.823136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.823276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.823302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.823415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.823442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.823554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.823579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.823692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.823717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.823840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.823866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.824054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.824082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.824239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.824277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.824418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.824443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 
00:34:21.935 [2024-07-25 01:20:14.824613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.824655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.824844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.824872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.825064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.825092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.825219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.825250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-25 01:20:14.825368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-25 01:20:14.825392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.825532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.825556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.825699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.825725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.825862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.825886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.826000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.826025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.826140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.826166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 
00:34:21.936 [2024-07-25 01:20:14.826307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.826333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.826472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.826497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.826618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.826643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.826784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.826809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.826925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.826950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.827095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.827123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.827294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.827321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.827438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.827462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.827601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.827626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.827765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.827790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 
00:34:21.936 [2024-07-25 01:20:14.827933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.827957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.828094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.828120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.828264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.828289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.828454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.828478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.828620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.828646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.828788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.828812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.828936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.828960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.829099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.829124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.829234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.829265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.829425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.829450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 
00:34:21.936 [2024-07-25 01:20:14.829617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.829643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.829794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.829819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.829962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.829986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.830102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.830127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.830271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.830296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.830464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.830489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.830605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.830630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.830772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.830797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.830966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.830991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.831134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.831159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 
00:34:21.936 [2024-07-25 01:20:14.831271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-25 01:20:14.831297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-25 01:20:14.831431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.831456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 00:34:21.937 [2024-07-25 01:20:14.831586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.831611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 00:34:21.937 [2024-07-25 01:20:14.831726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.831752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 00:34:21.937 [2024-07-25 01:20:14.831895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.831920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 00:34:21.937 [2024-07-25 01:20:14.832087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.832112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 00:34:21.937 [2024-07-25 01:20:14.832229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.832261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 00:34:21.937 [2024-07-25 01:20:14.832406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.832431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 00:34:21.937 [2024-07-25 01:20:14.832581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.832606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 00:34:21.937 [2024-07-25 01:20:14.832745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.937 [2024-07-25 01:20:14.832771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.937 qpair failed and we were unable to recover it. 
00:34:21.937 [2024-07-25 01:20:14.832938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.937 [2024-07-25 01:20:14.832963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.937 qpair failed and we were unable to recover it.
[The identical failure triplet (connect() failed, errno = 111; sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats 141 more times, from 2024-07-25 01:20:14.833104 through 01:20:14.856221.]
00:34:21.941 [2024-07-25 01:20:14.856388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.941 [2024-07-25 01:20:14.856428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:21.941 qpair failed and we were unable to recover it.
[The same failure triplet then repeats 67 more times for the new queue pair tqpair=0x7fafc4000b90, from 2024-07-25 01:20:14.856578 through 01:20:14.867627.]
00:34:21.943 [2024-07-25 01:20:14.867798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.867823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.867943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.867969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.868085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.868111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.868230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.868262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.868405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.868432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.868600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.868626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.868797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.868822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.868961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.868986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.869127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.869152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.869301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.869327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 
00:34:21.943 [2024-07-25 01:20:14.869468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.869494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.869636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.869662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.869808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.869834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.869976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.870002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.870148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-25 01:20:14.870174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-25 01:20:14.870286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.870312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.870480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.870506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.870670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.870695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.870835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.870860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.871000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.871026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 
00:34:21.944 [2024-07-25 01:20:14.871193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.871219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.871377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.871405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.871530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.871557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.871725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.871750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.871927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.871953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.872091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.872117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.872264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.872291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.872436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.872462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.872601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.872626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.872743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.872768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 
00:34:21.944 [2024-07-25 01:20:14.872911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.872937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.873081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.873107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.873251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.873278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.873450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.873476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.873590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.873615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.873754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.873784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.873906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.873931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.874046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.874071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.874207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.874233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.874349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.874375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 
00:34:21.944 [2024-07-25 01:20:14.874511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.874537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.874658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.874684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.874820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.874846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.874981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.875007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.875175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.875200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.875344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.875370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.875489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.875515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.875634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.875660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.875800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.875826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-25 01:20:14.875974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-25 01:20:14.876000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 
00:34:21.944 [2024-07-25 01:20:14.876116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.876142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.876282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.876308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.876458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.876484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.876649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.876675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.876790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.876816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.876934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.876960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.877103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.877129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.877270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.877297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.877465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.877491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.877606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.877632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 
00:34:21.945 [2024-07-25 01:20:14.877775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.877801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.877968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.877994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.878147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.878173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.878298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.878325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.878470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.878495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.878665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.878691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.878834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.878860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.878997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.879022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.879162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.879188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.879357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.879383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 
00:34:21.945 [2024-07-25 01:20:14.879525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.879551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.879689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.879715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.879851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.879877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.880021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.880046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.880184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.880210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.880387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.880417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.880563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.880589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.880735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.880761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.880878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.880904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.881074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.881099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 
00:34:21.945 [2024-07-25 01:20:14.881268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-25 01:20:14.881295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-25 01:20:14.881416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.881441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.881552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.881578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.881715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.881741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.881855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.881881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.882020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.882046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.882155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.882182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.882356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.882382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.882495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.882522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.882696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.882723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 
00:34:21.946 [2024-07-25 01:20:14.882870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.882895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.883013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.883039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.883179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.883205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.883317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.883343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.883491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.883517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.883662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.883688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.883830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.883856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.883974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.884000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.884116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.884141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.884290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.884326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 
00:34:21.946 [2024-07-25 01:20:14.884474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.884500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.884615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.884641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.884792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.884818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.884926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.884952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.885090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.885116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.885258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.885284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.885430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.885456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.885591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.885617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.885787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.885812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.885982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.886007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 
00:34:21.946 [2024-07-25 01:20:14.886178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.886203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.946 [2024-07-25 01:20:14.886348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.946 [2024-07-25 01:20:14.886375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.946 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.886542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.886568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.886720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.886746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.886895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.886921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.887034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.887064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.887215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.887257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.887427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.887453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.887591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.887617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.887754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.887780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 
00:34:21.947 [2024-07-25 01:20:14.887923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.887948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.888062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.888087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.888209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.888234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.888367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.888394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.888567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.888593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.888738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.888764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.888908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.888934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.889078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.889104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.889209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.889235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 00:34:21.947 [2024-07-25 01:20:14.889387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.889414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it. 
00:34:21.947 [2024-07-25 01:20:14.889562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.947 [2024-07-25 01:20:14.889588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.947 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats roughly 200 more times with advancing timestamps, through [2024-07-25 01:20:14.924636] ...]
00:34:21.954 [2024-07-25 01:20:14.924774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.924799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.924968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.924994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.925107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.925138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.925311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.925337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.925456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.925482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.925623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.925649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.925793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.925819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.925930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.925955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.926066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.926092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.926246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.926273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 
00:34:21.954 [2024-07-25 01:20:14.926441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.926467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.926587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.926612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.926754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.926779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.926906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.926932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.927052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-25 01:20:14.927077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-25 01:20:14.927216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.927246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.927399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.927425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.927537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.927564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.927701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.927726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.927868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.927895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 
00:34:21.955 [2024-07-25 01:20:14.928036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.928062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.928230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.928260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.928407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.928432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.928573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.928599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.928715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.928741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.928882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.928908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.929051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.929077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.929193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.929219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.929363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.929388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.929513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.929539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 
00:34:21.955 [2024-07-25 01:20:14.929666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.929691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.929828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.929853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.929995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.930022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.930161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.930187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.930356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.930382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.930503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.930529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.930641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.930667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.930847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.930873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.930990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.931017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 00:34:21.955 [2024-07-25 01:20:14.931161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.931188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.955 qpair failed and we were unable to recover it. 
00:34:21.955 [2024-07-25 01:20:14.931357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.955 [2024-07-25 01:20:14.931383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.931526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.931552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.931719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.931750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.931890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.931916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.932036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.932063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.932180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.932205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.932353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.932379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.932502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.932528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.932667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.932693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.932859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.932885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 
00:34:21.956 [2024-07-25 01:20:14.933026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.933052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.933161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.933186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.933334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.933360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.933499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.933524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.933632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.933658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.933776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.933802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.933951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.933979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.934127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.934153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.934303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.934329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.934507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.934532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 
00:34:21.956 [2024-07-25 01:20:14.934698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.934724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.934892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.934918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.935035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.935061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.935199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.935225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.935354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.935379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.935492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.935516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.935625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.935650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.935813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.935838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.935976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.956 [2024-07-25 01:20:14.936001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.956 qpair failed and we were unable to recover it. 00:34:21.956 [2024-07-25 01:20:14.936116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.936143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 
00:34:21.957 [2024-07-25 01:20:14.936314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.936341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.936480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.936505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.936673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.936699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.936839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.936865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.936987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.937013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.937181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.937206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.937354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.937380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.937516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.937541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.937689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.937715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.937858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.937884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 
00:34:21.957 [2024-07-25 01:20:14.938023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.938049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.938195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.938221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.938368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.938399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.938542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.938567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.938732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.938757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.938898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.938923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.939094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.939119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.939271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.939297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.939441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.939468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.939637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.939663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 
00:34:21.957 [2024-07-25 01:20:14.939804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.939830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.939966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.939992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.940136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.940162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.940306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.940332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.940452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.940478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.940652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.940677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.940827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.940853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.957 qpair failed and we were unable to recover it. 00:34:21.957 [2024-07-25 01:20:14.940993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.957 [2024-07-25 01:20:14.941019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.941161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.941187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.941331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.941358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 
00:34:21.958 [2024-07-25 01:20:14.941496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.941522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.941642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.941668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.941839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.941864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.941977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.942003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.942150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.942176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.942323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.942349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.942463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.942489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.942610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.942636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.942751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.942776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.942948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.942974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 
00:34:21.958 [2024-07-25 01:20:14.943086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.943112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.943282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.943308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.943450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.943475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.943621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.943647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.943760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.943785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.943902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.943927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.944046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.944071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.944209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.944235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.944386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.944412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.944553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.944579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 
00:34:21.958 [2024-07-25 01:20:14.944752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.944777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.944915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.944940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.945084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.945114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.945264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.945290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.945460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.945485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.945631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.945657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.945770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.945795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.945964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.945989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.946132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.946158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 00:34:21.958 [2024-07-25 01:20:14.946272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.958 [2024-07-25 01:20:14.946299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:21.958 qpair failed and we were unable to recover it. 
00:34:21.958 [2024-07-25 01:20:14.946443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.958 [2024-07-25 01:20:14.946469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:21.958 qpair failed and we were unable to recover it.
00:34:21.958-00:34:21.964 [... the same three-line sequence (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it.") repeats for 209 further connection attempts between 01:20:14.946617 and 01:20:14.981206, all against addr=10.0.0.2, port=4420; tqpair=0x7fafc4000b90 through 01:20:14.979902, then tqpair=0x7fafd4000b90 from 01:20:14.980067 onward ...]
00:34:21.964 [2024-07-25 01:20:14.981356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.981383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.981503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.981528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.981674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.981699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.981864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.981889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.982002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.982026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.982170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.982196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.982318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.982344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.982487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.982512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.982629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.982654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.982784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.982809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 
00:34:21.965 [2024-07-25 01:20:14.982926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.982951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.983062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.983087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.983230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.983262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.983406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.983431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.983560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.983584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.983702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.983727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.983872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.983897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.984008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.984032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.984150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.984176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.984330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.984356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 
00:34:21.965 [2024-07-25 01:20:14.984497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.984521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.984647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.984672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.984788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.984817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.984962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.984987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.985106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.985131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.985279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.985304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.985439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.985464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.985607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.985632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.985750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.985774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.985887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.985913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 
00:34:21.965 [2024-07-25 01:20:14.986027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.986052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.986165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.986189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.986307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.986333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.986502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.986527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.986642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.986667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.986789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.986814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.986958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.986984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.987099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.987124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.987248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.987272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.965 [2024-07-25 01:20:14.987395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.987420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 
00:34:21.965 [2024-07-25 01:20:14.987553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.965 [2024-07-25 01:20:14.987577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.965 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.987719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.987744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.987857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.987881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.987996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.988020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.988161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.988186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.988340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.988366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.988480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.988505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.988636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.988660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.988777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.988802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.988924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.988951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 
00:34:21.966 [2024-07-25 01:20:14.989090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.989114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.989256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.989281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.989390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.989416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.989529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.989554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.989692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.989716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.989857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.989882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.990024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.990049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.990187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.990212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.990361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.990386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.990531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.990556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 
00:34:21.966 [2024-07-25 01:20:14.990672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.990697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.990846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.990872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.990979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.991009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.991150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.991174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.991288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.991313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.991423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.991448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.991567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.991593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.991714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.991740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.991850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.991876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.991986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.992011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 
00:34:21.966 [2024-07-25 01:20:14.992124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.992149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.992275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.992301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.992415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.992440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.992589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.992615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.992755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.992779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.966 [2024-07-25 01:20:14.992924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.966 [2024-07-25 01:20:14.992949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.966 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.993095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.993121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.993269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.993294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.993425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.993451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.993592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.993618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 
00:34:21.967 [2024-07-25 01:20:14.993739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.993763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.993876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.993901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.994041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.994065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.994220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.994259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.994374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.994399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.994541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.994565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.994701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.994725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.994844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.994869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.995013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.995038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.995178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.995207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 
00:34:21.967 [2024-07-25 01:20:14.995329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.995353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.995468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.995493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.995609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.995634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.995750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.995775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.995914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.995939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.996075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.996100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.996216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.996248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.996361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.996386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.996532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.996556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.996675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.996701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 
00:34:21.967 [2024-07-25 01:20:14.996817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.996843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.996963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.996988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.997130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.997155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.997301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.997327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.997446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.997472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.997591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.997616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.997772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.997797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.997938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.997963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.998096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.998120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.998234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.998274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 
00:34:21.967 [2024-07-25 01:20:14.998386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.998410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.998526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.998552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.998680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.998704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.998815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.998842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.998985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.999010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.999127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.999153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.999324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.999350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.999470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.999494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.967 qpair failed and we were unable to recover it. 00:34:21.967 [2024-07-25 01:20:14.999637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.967 [2024-07-25 01:20:14.999662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:14.999778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:14.999802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 
00:34:21.968 [2024-07-25 01:20:14.999925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:14.999950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.000094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.000119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.000258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.000283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.000396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.000421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.000566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.000591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.000705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.000730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.000883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.000907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.001049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.001073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.001208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.001233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 00:34:21.968 [2024-07-25 01:20:15.001353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.968 [2024-07-25 01:20:15.001381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.968 qpair failed and we were unable to recover it. 
00:34:21.968 [2024-07-25 01:20:15.001510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.968 [2024-07-25 01:20:15.001535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.968 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated for tqpair=0x7fafd4000b90, timestamps 2024-07-25 01:20:15.001656 through 01:20:15.011222 ...]
00:34:21.969 [2024-07-25 01:20:15.011367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fad390 is same with the state(5) to be set
00:34:21.969 [2024-07-25 01:20:15.011547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.969 [2024-07-25 01:20:15.011584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:21.969 qpair failed and we were unable to recover it.
[... same sequence repeated for tqpair=0x7fafcc000b90 through 01:20:15.013315 ...]
00:34:21.970 [2024-07-25 01:20:15.013484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.970 [2024-07-25 01:20:15.013521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.970 qpair failed and we were unable to recover it.
[... same sequence repeated for tqpair=0x7fafd4000b90, timestamps 2024-07-25 01:20:15.013665 through 01:20:15.036588 ...]
00:34:21.973 [2024-07-25 01:20:15.036702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.973 [2024-07-25 01:20:15.036727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:21.973 qpair failed and we were unable to recover it.
00:34:21.973 [2024-07-25 01:20:15.036868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-25 01:20:15.036893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-25 01:20:15.037041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-25 01:20:15.037066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-25 01:20:15.037207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-25 01:20:15.037232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.037401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.037427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.037546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.037572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.037694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.037718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.037860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.037886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.038030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.038056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.038176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.038201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.038346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.038371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 
00:34:21.974 [2024-07-25 01:20:15.038483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.038508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.038675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.038701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.038842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.038867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.038983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.039007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.039112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.039137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.039257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.039282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.039423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.039448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.039591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.039616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.039732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.039757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.039880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.039905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 
00:34:21.974 [2024-07-25 01:20:15.040070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.040095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.040265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.040292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.040401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.040427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.040537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.040561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.040679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.040704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.040855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.040879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.040999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.041024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.041162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.041187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.041329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.041354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.041472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.041495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 
00:34:21.974 [2024-07-25 01:20:15.041642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.041667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.041809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.041834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.041945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.041970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.042096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.042120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.042230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.042261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.042393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.042416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.042536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.042560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.042705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.042728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.042871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.042894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-25 01:20:15.043008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-25 01:20:15.043033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 
00:34:21.975 [2024-07-25 01:20:15.043152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.043176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.043319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.043344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.043468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.043493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.043633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.043661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.043802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.043826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.043971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.043994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.044111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.044134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.044277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.044302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.044450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.044475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.044616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.044639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 
00:34:21.975 [2024-07-25 01:20:15.044755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.044778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.044919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.044942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.045053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.045077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.045250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.045276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.045395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.045420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.045563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.045587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.045730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.045754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.045878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.045903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.046043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.046069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.046182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.046207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 
00:34:21.975 [2024-07-25 01:20:15.046333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.046359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.046494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.046519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.046641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.046666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.046784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.046810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.046954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.046980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.047090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.047115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.047266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.047292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.047411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.047436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.047551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.047576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.047692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.047719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 
00:34:21.975 [2024-07-25 01:20:15.047842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.047868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.047979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.048005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.048147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.048171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-25 01:20:15.048338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-25 01:20:15.048364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.048475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.048501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.048622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.048647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.048762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.048786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.048901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.048924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.049040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.049064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.049178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.049202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 
00:34:21.976 [2024-07-25 01:20:15.049326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.049350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.049463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.049488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.049621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.049646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.049771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.049799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.049909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.049933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-25 01:20:15.050078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-25 01:20:15.050102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:22.236 [2024-07-25 01:20:15.072624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.236 [2024-07-25 01:20:15.072661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.236 qpair failed and we were unable to recover it. 00:34:22.236 [2024-07-25 01:20:15.072856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.236 [2024-07-25 01:20:15.072882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.236 qpair failed and we were unable to recover it. 00:34:22.236 [2024-07-25 01:20:15.073000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.236 [2024-07-25 01:20:15.073026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.236 qpair failed and we were unable to recover it. 00:34:22.236 [2024-07-25 01:20:15.073199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.236 [2024-07-25 01:20:15.073224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.236 qpair failed and we were unable to recover it. 
[... identical errors repeat through 01:20:15.074786 ...]
00:34:22.236 [2024-07-25 01:20:15.092123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.236 [2024-07-25 01:20:15.092156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:22.236 qpair failed and we were unable to recover it.
[... identical errors repeat through 01:20:15.105115 ...]
00:34:22.238 [2024-07-25 01:20:15.105260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.105286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.105442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.105466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.105612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.105637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.105757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.105783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.105938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.105962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.106109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.106133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.106301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.106326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.106464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.106489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.106669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.106693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.106865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.106889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 
00:34:22.238 [2024-07-25 01:20:15.107030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.107055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.107180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.107205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.107348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.107374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.107537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.107562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.107681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.107705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.107851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.107875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.107993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.108018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.108191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.108216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.108382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.108406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.108552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.108577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 
00:34:22.238 [2024-07-25 01:20:15.108729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.108754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.108899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.108923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.109078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.109104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.109254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.109280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.238 [2024-07-25 01:20:15.109432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.238 [2024-07-25 01:20:15.109456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.238 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.109618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.109643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.109758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.109783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.109910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.109935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.110214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.110239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.110432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.110457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 
00:34:22.239 [2024-07-25 01:20:15.110615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.110640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.110793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.110817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.110971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.110995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.111174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.111204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.111374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.111400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.111577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.111601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.111748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.111772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.111932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.111957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.112081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.112105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.112254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.112279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 
00:34:22.239 [2024-07-25 01:20:15.112404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.112429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.112581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.112605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.112749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.112773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.239 qpair failed and we were unable to recover it. 00:34:22.239 [2024-07-25 01:20:15.112962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.239 [2024-07-25 01:20:15.112986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.532296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.532334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.532507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.532537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.532678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.532718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.532913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.532940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.533227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.533317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.533445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.533469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 
00:34:22.505 [2024-07-25 01:20:15.533622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.533647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.533793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.533833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.533953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.533978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.534121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.534146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.534305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.534333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.534477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.534506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.534696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.534724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.534870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.534913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.535042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.535086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.535254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.535283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 
00:34:22.505 [2024-07-25 01:20:15.535416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.535442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.535545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.535570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.505 [2024-07-25 01:20:15.535726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.505 [2024-07-25 01:20:15.535751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.505 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.535947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.535973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.536179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.536208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.536375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.536404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.536531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.536558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.536696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.536723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.536901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.536930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.537097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.537122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 
00:34:22.506 [2024-07-25 01:20:15.537232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.537280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.537463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.537492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.537631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.537673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.537914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.537949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.538135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.538164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.538334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.538361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.538549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.538578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.538750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.538779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.538967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.538993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.539198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.539227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 
00:34:22.506 [2024-07-25 01:20:15.539372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.539401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.539564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.539589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.539796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.539842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.540029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.540056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.540228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.540262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.540436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.540466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.540626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.540655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.540850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.540877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.541025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.541068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.541192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.541222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 
00:34:22.506 [2024-07-25 01:20:15.541427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.541454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.541592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.541623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.541789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.541818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.542011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.542038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.542203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.542235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.542411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.542440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.542610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.542637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.542747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.542789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.542976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.543005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 00:34:22.506 [2024-07-25 01:20:15.543141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.506 [2024-07-25 01:20:15.543168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.506 qpair failed and we were unable to recover it. 
00:34:22.506 [2024-07-25 01:20:15.543340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.543381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.543534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.543562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.543730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.543757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.543877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.543911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.544084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.544114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.544289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.544316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.544431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.544473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.544655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.544686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.544878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.544905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.545068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.545097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 
00:34:22.507 [2024-07-25 01:20:15.545255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.545297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.545466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.545493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.545689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.545741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.545927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.545963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.546162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.546191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.546370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.546398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.546543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.546570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.546755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.546783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.546932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.546962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.547123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.547150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 
00:34:22.507 [2024-07-25 01:20:15.547298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.547326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.547471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.547500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.547611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.547638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.547750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.547777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.547958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.547987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.548116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.548146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.548313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.548341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.548505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.548536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.548694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.548724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 00:34:22.507 [2024-07-25 01:20:15.548885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.507 [2024-07-25 01:20:15.548912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.507 qpair failed and we were unable to recover it. 
00:34:22.507 [2024-07-25 01:20:15.549033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.507 [2024-07-25 01:20:15.549061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:22.507 qpair failed and we were unable to recover it.
00:34:22.508 [2024-07-25 01:20:15.551026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.508 [2024-07-25 01:20:15.551071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:22.508 qpair failed and we were unable to recover it.
00:34:22.508 [2024-07-25 01:20:15.554963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.508 [2024-07-25 01:20:15.555003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:22.508 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats back to back, roughly 200 occurrences in total, from 01:20:15.549 through 01:20:15.588: every attempt is connect() failed, errno = 111 against addr=10.0.0.2, port=4420, and the failing tqpair alternates among 0x7fafc4000b90, 0x7fafd4000b90, and 0x1f9f840 ...]
00:34:22.513 [2024-07-25 01:20:15.588325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.513 [2024-07-25 01:20:15.588353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:22.513 qpair failed and we were unable to recover it.
00:34:22.513 [2024-07-25 01:20:15.588493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.588525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.588665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.588708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.588895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.588922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.589035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.589078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.589233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.589271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.589439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.589466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.589614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.589641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.589789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.589832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.590025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.590052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.590215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.590253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 
00:34:22.513 [2024-07-25 01:20:15.590438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.590468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.590631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.590658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.590779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.513 [2024-07-25 01:20:15.590806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.513 qpair failed and we were unable to recover it. 00:34:22.513 [2024-07-25 01:20:15.590953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.590981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.591130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.591157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.591298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.591326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.591467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.591510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.591673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.591701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.591849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.591893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.592017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.592048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 
00:34:22.514 [2024-07-25 01:20:15.592188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.592216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.592338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.592366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.592481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.592507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.592660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.592688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.592828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.592855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.593003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.593029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.593253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.593311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.593481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.593509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.593734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.593762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 00:34:22.514 [2024-07-25 01:20:15.593931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.514 [2024-07-25 01:20:15.593975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.514 qpair failed and we were unable to recover it. 
[... the same triplet repeats for tqpair=0x7fafcc000b90 from 01:20:15.593 through 01:20:15.605 ...]
00:34:22.516 [2024-07-25 01:20:15.605415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.516 [2024-07-25 01:20:15.605460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:22.516 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x1f9f840 from 01:20:15.605 through 01:20:15.609, switches back to tqpair=0x7fafcc000b90 from 01:20:15.609 through 01:20:15.611, then returns to tqpair=0x1f9f840 through 01:20:15.615 ...]
00:34:22.517 [2024-07-25 01:20:15.615141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.517 [2024-07-25 01:20:15.615185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:22.517 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7fafd4000b90 from 01:20:15.615 through 01:20:15.621, followed by one final failure on tqpair=0x7fafcc000b90 at 01:20:15.621356; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:34:22.518 [2024-07-25 01:20:15.621545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.621573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.621697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.621725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.621867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.621894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.622062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.622106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.622253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.622290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.622463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.622490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.622689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.622738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.622891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.622940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.623113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.623140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.623269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.623296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 
00:34:22.518 [2024-07-25 01:20:15.623461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.623506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.623668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.623713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.623906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.623936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.624110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.624150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.624322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.624350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.624520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.624564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.624813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.624866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.625107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.625156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.625309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.625336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.625499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.625526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 
00:34:22.518 [2024-07-25 01:20:15.625656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.625686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.625841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.625871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.626044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.626082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.626254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.626284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.626424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.518 [2024-07-25 01:20:15.626451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.518 qpair failed and we were unable to recover it. 00:34:22.518 [2024-07-25 01:20:15.626694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.626752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.626955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.627005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.627177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.627205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.627356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.627384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.627551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.627599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 
00:34:22.519 [2024-07-25 01:20:15.627794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.627839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.628024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.628079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.628223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.628263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.628412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.628440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.628620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.628648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.628770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.628798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.628970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.629015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.629187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.629214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.629343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.629369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.629521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.629566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 
00:34:22.519 [2024-07-25 01:20:15.629720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.629764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.629930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.629974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.630120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.630147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.630281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.630312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.630495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.630540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.630719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.630762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.630904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.630935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.631112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.631139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.631287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.631315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.631478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.631506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 
00:34:22.519 [2024-07-25 01:20:15.631658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.631687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.631840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.631869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.632036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.632075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.632299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.632326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.632471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.632497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.632694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.632723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.519 [2024-07-25 01:20:15.632887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.519 [2024-07-25 01:20:15.632916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.519 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.633101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.633130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.633318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.633358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.633519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.633559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 
00:34:22.520 [2024-07-25 01:20:15.633731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.633762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.633992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.634043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.634205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.634231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.634362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.634389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.634505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.634551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.634763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.634792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.634976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.635006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.635134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.635165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.635330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.635358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.635502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.635547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 
00:34:22.520 [2024-07-25 01:20:15.635681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.635710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.635894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.635960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.636146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.636175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.636324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.636351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.636490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.636516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.636686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.636715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.636897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.636926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.637058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.637103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.637255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.637299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.637418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.637444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 
00:34:22.520 [2024-07-25 01:20:15.637598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.637624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.637812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.637842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.638027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.638057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.638216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.638248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.638361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.638388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.638570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.638615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.638914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.638970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.639100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.639131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.639311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.639339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.639453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.639480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 
00:34:22.520 [2024-07-25 01:20:15.639652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.639679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.639839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.639868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.640072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.640102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.640281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.640308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.520 [2024-07-25 01:20:15.640482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.520 [2024-07-25 01:20:15.640509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.520 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.640660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.640688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.640845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.640871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.641012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.641047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.641216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.641256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.641399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.641425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 
00:34:22.521 [2024-07-25 01:20:15.641586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.641615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.641792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.641821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.642005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.642034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.642192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.642221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.642419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.642446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.642651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.642678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.642855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.642882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.643127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.643156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.643352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.643380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.643501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.643529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 
00:34:22.521 [2024-07-25 01:20:15.643676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.643702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.643890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.643920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.644082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.644112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.644255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.644292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.644482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.644538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.644702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.644735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.644934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.644965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.645152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.645182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.645348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.645376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.645546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.645573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 
00:34:22.521 [2024-07-25 01:20:15.645785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.645820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.646038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.646069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.646247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.646276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.646424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.646451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.646600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.646644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.646828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.646858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.647101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.647156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.647348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.647376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.647522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.647571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 00:34:22.521 [2024-07-25 01:20:15.647745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.521 [2024-07-25 01:20:15.647775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.521 qpair failed and we were unable to recover it. 
00:34:22.803 [2024-07-25 01:20:15.647973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.803 [2024-07-25 01:20:15.648023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:22.803 qpair failed and we were unable to recover it.
00:34:22.803 [... the same three-line sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats back-to-back from 01:20:15.648 through 01:20:15.687 (log stamps 00:34:22.803 to 00:34:22.809). Every attempt targets addr=10.0.0.2, port=4420, cycling through tqpair values 0x7fafc4000b90, 0x7fafcc000b90, 0x7fafd4000b90, and 0x1f9f840 ...]
00:34:22.809 [2024-07-25 01:20:15.687790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.687819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.688010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.688040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.688227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.688276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.688461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.688490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.688656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.688682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.688841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.688871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.689025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.689054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.689188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.689215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.689404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.689432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.689594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.689623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 
00:34:22.809 [2024-07-25 01:20:15.689817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.689843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.690005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.690034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.690192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.690221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.690413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.690440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.690635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.690662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.690815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.690859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.690992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.691018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.691164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.691190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.691365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.691395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.691561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.691587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 
00:34:22.809 [2024-07-25 01:20:15.691733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.691778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.691903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.691932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.692099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.692126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.692263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.692291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.692465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.692494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.692627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.692653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.692771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.692798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.692990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.693019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.693184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.693210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 00:34:22.809 [2024-07-25 01:20:15.693349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.809 [2024-07-25 01:20:15.693376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.809 qpair failed and we were unable to recover it. 
00:34:22.809 [2024-07-25 01:20:15.693526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.693552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.693755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.693781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.693945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.693975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.694132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.694160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.694346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.694373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.694511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.694538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.694704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.694733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.694867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.694893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.695020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.695046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.695191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.695217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 
00:34:22.810 [2024-07-25 01:20:15.695337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.695364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.695531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.695557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.695730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.695761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.695926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.695953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.696060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.696087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.696291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.696318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.696469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.696496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.696603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.696629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.696738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.696764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.696930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.696956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 
00:34:22.810 [2024-07-25 01:20:15.697113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.697143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.697304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.697336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.697505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.697532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.697658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.697685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.697830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.697857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.698036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.698062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.698220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.698257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.698427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.698454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.698574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.698600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.698710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.698736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 
00:34:22.810 [2024-07-25 01:20:15.698909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.698938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.699107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.699134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.699252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.699277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.699438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.699478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.699670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.699697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.699858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.699887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.700042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.810 [2024-07-25 01:20:15.700072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.810 qpair failed and we were unable to recover it. 00:34:22.810 [2024-07-25 01:20:15.700259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.700287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.700420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.700449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.700639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.700673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 
00:34:22.811 [2024-07-25 01:20:15.700837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.700863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.701032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.701059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.701171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.701197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.701394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.701421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.701586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.701617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.701765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.701795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.701984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.702010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.702156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.702183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.702301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.702328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.702472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.702498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 
00:34:22.811 [2024-07-25 01:20:15.702630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.702657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.702774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.702800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.702989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.703016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.703176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.703205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.703396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.703425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.703591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.703618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.703774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.703803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.703925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.703954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.704102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.704131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.704305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.704333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 
00:34:22.811 [2024-07-25 01:20:15.704476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.704503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.704682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.704712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.704864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.704893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.705018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.705048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.705231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.705264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.705432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.705461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.705619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.705653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.705818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.705846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.705994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.706031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.706200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.706229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 
00:34:22.811 [2024-07-25 01:20:15.706429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.706455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.706644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.706674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.706804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.706834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.706994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.707021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.707181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.811 [2024-07-25 01:20:15.707210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.811 qpair failed and we were unable to recover it. 00:34:22.811 [2024-07-25 01:20:15.707381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.707411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.707567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.707594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.707702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.707728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.707894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.707936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.708096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.708123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 
00:34:22.812 [2024-07-25 01:20:15.708307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.708337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.708532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.708558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.708726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.708753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.708912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.708941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.709121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.709150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.709311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.709338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.709488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.709514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.709652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.709678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.709824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.709851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.710012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.710042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 
00:34:22.812 [2024-07-25 01:20:15.710172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.710202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.710395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.710422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.710561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.710587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.710731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.710760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.710931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.710957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.711115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.711145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.711284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.711315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.711489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.711515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.711699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.711729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 00:34:22.812 [2024-07-25 01:20:15.711859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.711890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it. 
00:34:22.812 [2024-07-25 01:20:15.712064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.812 [2024-07-25 01:20:15.712090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.812 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with fresh timestamps through 2024-07-25 01:20:15.750481; the duplicate entries are elided here ...]
00:34:22.818 [2024-07-25 01:20:15.750624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.750668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it.
00:34:22.818 [2024-07-25 01:20:15.750822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.750850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.751001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.751029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.751174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.751205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.751356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.751384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.751528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.751555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.751702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.751748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.751918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.751945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.752092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.752118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.752253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.752293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.752488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.752526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 
00:34:22.818 [2024-07-25 01:20:15.752666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.752696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.752862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.752889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.753061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.753087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.753211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.753237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.753469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.753499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.753697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.753723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.753884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.753914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.754097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.818 [2024-07-25 01:20:15.754148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.818 qpair failed and we were unable to recover it. 00:34:22.818 [2024-07-25 01:20:15.754318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.754346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.754471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.754497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 
00:34:22.819 [2024-07-25 01:20:15.754673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.754704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.754847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.754874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.754997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.755023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.755164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.755205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.755350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.755380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.755572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.755602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.755755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.755785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.755927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.755960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.756103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.756130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.756305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.756345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 
00:34:22.819 [2024-07-25 01:20:15.756515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.756543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.756664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.756691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.756862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.756905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.757072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.757099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.757263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.757301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.757444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.757473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.757626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.757653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.757792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.757823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.758008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.758038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.758202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.758230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 
00:34:22.819 [2024-07-25 01:20:15.758402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.758433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.758626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.758656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.758796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.758824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.758974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.759000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.759168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.759211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.759358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.759385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.759573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.759603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.759772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.759800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.759944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.759971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.760126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.760156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 
00:34:22.819 [2024-07-25 01:20:15.760306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.760336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.760489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.760516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.760677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.760706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.760866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.760892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.761008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.761039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.761188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.819 [2024-07-25 01:20:15.761233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.819 qpair failed and we were unable to recover it. 00:34:22.819 [2024-07-25 01:20:15.761403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.761432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.761578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.761605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.761775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.761818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.761977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.762006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 
00:34:22.820 [2024-07-25 01:20:15.762195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.762222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.762397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.762424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.762582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.762612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.762771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.762798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.762941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.762985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.763147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.763177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.763346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.763373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.763558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.763587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.763722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.763752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.763918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.763945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 
00:34:22.820 [2024-07-25 01:20:15.764093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.764125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.764251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.764279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.764438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.764464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.764629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.764659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.764810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.764840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.765013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.765040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.765225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.765264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.765436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.765466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.765617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.765643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.765841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.765871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 
00:34:22.820 [2024-07-25 01:20:15.766036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.766064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.766213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.766239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.766360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.766387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.766510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.766537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.766678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.766705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.766901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.766930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.767082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.767110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.767272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.767299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.767410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.767437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.767598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.767627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 
00:34:22.820 [2024-07-25 01:20:15.767791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.767818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.768002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.768032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.768183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.768212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.820 qpair failed and we were unable to recover it. 00:34:22.820 [2024-07-25 01:20:15.768363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.820 [2024-07-25 01:20:15.768392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.768564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.768601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.768771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.768801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.768944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.768971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.769116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.769142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.769312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.769342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.769503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.769531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 
00:34:22.821 [2024-07-25 01:20:15.769675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.769718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.769901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.769930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.770119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.770145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.770297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.770327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.770487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.770517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.770652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.770680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.770827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.770872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.771021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.771051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.771258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.771288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.771468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.771495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 
00:34:22.821 [2024-07-25 01:20:15.771708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.771738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.771903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.771930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.772116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.772145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.772321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.772348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.772466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.772493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.772606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.772632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.772768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.772794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.772937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.772964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.773106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.773132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.773268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.773299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 
00:34:22.821 [2024-07-25 01:20:15.773486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.773513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.773639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.773666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.773781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.773807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.773916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.773943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.774082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.774109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.774316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.774346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.774478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.774506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.774655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.774682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.774798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.774824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 00:34:22.821 [2024-07-25 01:20:15.774966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.821 [2024-07-25 01:20:15.774993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.821 qpair failed and we were unable to recover it. 
00:34:22.821 [2024-07-25 01:20:15.775135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.821 [2024-07-25 01:20:15.775162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:22.821 qpair failed and we were unable to recover it.
00:34:22.822 [the identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7fafc4000b90 through 01:20:15.779130]
00:34:22.822 [2024-07-25 01:20:15.779346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.822 [2024-07-25 01:20:15.779404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:22.822 qpair failed and we were unable to recover it.
00:34:22.827 [the same triplet repeats for tqpair=0x7fafd4000b90 through 01:20:15.811524; every connection attempt to 10.0.0.2, port=4420 fails the same way and no qpair recovers]
00:34:22.827 [2024-07-25 01:20:15.811667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.827 [2024-07-25 01:20:15.811711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.827 qpair failed and we were unable to recover it. 00:34:22.827 [2024-07-25 01:20:15.811840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.827 [2024-07-25 01:20:15.811870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.827 qpair failed and we were unable to recover it. 00:34:22.827 [2024-07-25 01:20:15.812037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.827 [2024-07-25 01:20:15.812064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.827 qpair failed and we were unable to recover it. 00:34:22.827 [2024-07-25 01:20:15.812191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.827 [2024-07-25 01:20:15.812229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.827 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.812372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.812398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.812509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.812535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.812664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.812690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.812832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.812862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.813026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.813052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.813196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.813222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 
00:34:22.828 [2024-07-25 01:20:15.813396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.813422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.813536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.813562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.813671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.813697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.813862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.813891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.814019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.814045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.814186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.814212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.814371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.814398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.814524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.814550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.814675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.814702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.814845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.814871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 
00:34:22.828 [2024-07-25 01:20:15.815012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.815038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.815203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.815251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.815414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.815443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.815574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.815612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.815770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.815814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.815960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.815989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.816225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.816257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.816382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.816408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.816555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.816582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.816725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.816751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 
00:34:22.828 [2024-07-25 01:20:15.816885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.816911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.817033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.817059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.817205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.817231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.817384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.817411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.817577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.817607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.817728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.817756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.817871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.817898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.818043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.818069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.818267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.818294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.828 [2024-07-25 01:20:15.818437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.818464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 
00:34:22.828 [2024-07-25 01:20:15.818605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.828 [2024-07-25 01:20:15.818632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.828 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.818748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.818785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.818932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.818969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.819113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.819140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.819302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.819329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.819471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.819497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.819646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.819672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.819824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.819850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.819973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.819999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.820110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.820136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 
00:34:22.829 [2024-07-25 01:20:15.820252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.820289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.820413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.820439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.820600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.820626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.820738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.820765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.820933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.820975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.821114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.821143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.821300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.821327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.821495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.821521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.821680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.821710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.821878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.821904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 
00:34:22.829 [2024-07-25 01:20:15.822054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.822097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.822279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.822306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.822423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.822449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.822562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.822589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.822767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.822794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.822914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.822941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.823060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.823087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.823252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.823298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.823409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.823435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.823574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.823601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 
00:34:22.829 [2024-07-25 01:20:15.823718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.823744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.823887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.823913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.824091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.824135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.824279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.824305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.824426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.824456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.824570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.824596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.824703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.824729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.824840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.829 [2024-07-25 01:20:15.824867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.829 qpair failed and we were unable to recover it. 00:34:22.829 [2024-07-25 01:20:15.825001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.825028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.825149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.825186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 
00:34:22.830 [2024-07-25 01:20:15.825342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.825369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.825484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.825511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.825650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.825676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.825828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.825854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.825973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.825999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.826167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.826193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.826315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.826342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.826484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.826510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.826658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.826684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.826790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.826815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 
00:34:22.830 [2024-07-25 01:20:15.826929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.826955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.827101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.827127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.827350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.827376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.827490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.827516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.827683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.827709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.827851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.827877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.828039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.828064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.828198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.828226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.828389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.828416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.828534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.828560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 
00:34:22.830 [2024-07-25 01:20:15.828688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.828714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.828838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.828864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.828987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.829013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.829156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.829182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.829356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.829383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.829498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.829524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.829637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.829664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.829804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.829830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.829974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.830000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.830167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.830194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 
00:34:22.830 [2024-07-25 01:20:15.830358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.830385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.830548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.830591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.830744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.830771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.830896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.830922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.831061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.831091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.831251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.830 [2024-07-25 01:20:15.831297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.830 qpair failed and we were unable to recover it. 00:34:22.830 [2024-07-25 01:20:15.831454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.831480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.831601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.831642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.831810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.831836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.831974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.832001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 
00:34:22.831 [2024-07-25 01:20:15.832122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.832164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.832323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.832351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.832505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.832534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.832703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.832730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.832894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.832921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.833062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.833089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.833233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.833283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.833428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.833456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.833596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.833622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 00:34:22.831 [2024-07-25 01:20:15.833761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.831 [2024-07-25 01:20:15.833787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.831 qpair failed and we were unable to recover it. 
00:34:22.831 [2024-07-25 01:20:15.833912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.831 [2024-07-25 01:20:15.833939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:22.831 qpair failed and we were unable to recover it.
00:34:22.837 [entries from 2024-07-25 01:20:15.834084 through 01:20:15.869134 repeat the same three-line failure verbatim: every connect() retry to 10.0.0.2:4420 on tqpair=0x7fafd4000b90 fails with errno = 111 and the qpair cannot be recovered]
00:34:22.837 [2024-07-25 01:20:15.869268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.869295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.869409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.869439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.869554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.869580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.869694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.869720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.869859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.869886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.870057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.870083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.870203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.870230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.870356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.870382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.870498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.870523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.870668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.870694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 
00:34:22.837 [2024-07-25 01:20:15.870837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.870863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.871010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.871035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.871158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.871184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.871308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.871334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.871451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.871476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.871603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.871629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.871746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.871771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.871916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.871943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.872059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.872085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.872209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.872236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 
00:34:22.837 [2024-07-25 01:20:15.872367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.872393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.872522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.872548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.872672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.872699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.872839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.872864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.872984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.873010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.837 qpair failed and we were unable to recover it. 00:34:22.837 [2024-07-25 01:20:15.873158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.837 [2024-07-25 01:20:15.873184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.873331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.873358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.873472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.873498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.873626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.873652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.873768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.873794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 
00:34:22.838 [2024-07-25 01:20:15.873934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.873959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.874099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.874125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.874307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.874334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.874450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.874475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.874624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.874650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.874815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.874841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.874979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.875004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.875128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.875153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.875290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.875316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.875462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.875488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 
00:34:22.838 [2024-07-25 01:20:15.875638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.875665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.875776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.875806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.875978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.876004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.876128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.876154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.876273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.876300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.876419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.876445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.876602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.876627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.876769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.876794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.876914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.876940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.877063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.877089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 
00:34:22.838 [2024-07-25 01:20:15.877234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.877265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.877379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.877406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.877520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.877546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.877685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.877710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.877881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.877907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.878084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.878110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.878220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.878250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.878371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.878396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.878618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.878643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.878799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.878824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 
00:34:22.838 [2024-07-25 01:20:15.878975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.879000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.879117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.879143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.838 [2024-07-25 01:20:15.879361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.838 [2024-07-25 01:20:15.879387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.838 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.879501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.879526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.879667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.879692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.879864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.879890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.880022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.880047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.880193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.880218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.880353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.880378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.880505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.880531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 
00:34:22.839 [2024-07-25 01:20:15.880676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.880701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.880842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.880868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.880984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.881009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.881180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.881206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.881369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.881396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.881500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.881526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.881649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.881675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.881816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.881841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.881988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.882013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.882134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.882159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 
00:34:22.839 [2024-07-25 01:20:15.882280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.882306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.882428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.882457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.882600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.882626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.882747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.882772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.882884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.882911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.883061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.883087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.883225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.883258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.883382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.883409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.883558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.883583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.883703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.883730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 
00:34:22.839 [2024-07-25 01:20:15.883845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.883871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.884014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.884040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.884176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.884202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.884320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.884347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.884460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.884485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.884708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.884733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.884882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.884908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.885047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.885072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.885215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.885240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.839 [2024-07-25 01:20:15.885395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.885421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 
00:34:22.839 [2024-07-25 01:20:15.885562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.839 [2024-07-25 01:20:15.885588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.839 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.885758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.885784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.885938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.885964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.886135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.886162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.886329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.886356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.886530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.886557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.886701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.886727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.886875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.886901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.887094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.887120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.887229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.887260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 
00:34:22.840 [2024-07-25 01:20:15.887397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.887422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.887535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.887560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.887688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.887713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.887838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.887863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.888005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.888030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.888173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.888200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.888326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.888352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.888462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.888487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.888642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.888667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.888805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.888830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 
00:34:22.840 [2024-07-25 01:20:15.888941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.888967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.889085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.889117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.889270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.889296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.889419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.889445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.889559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.889584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.889726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.889751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.889893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.889917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.890064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.890090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.890226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.890267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 00:34:22.840 [2024-07-25 01:20:15.890393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.840 [2024-07-25 01:20:15.890419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.840 qpair failed and we were unable to recover it. 
00:34:22.840 [2024-07-25 01:20:15.890550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.840 [2024-07-25 01:20:15.890576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:22.840 qpair failed and we were unable to recover it.
00:34:22.840 [... the same three-line error sequence repeats 209 more times, from 2024-07-25 01:20:15.890755 through 01:20:15.925747 (wall clock 00:34:22.840 to 00:34:22.846), every attempt failing with errno = 111 for the same tqpair=0x7fafd4000b90, addr=10.0.0.2, port=4420 ...]
00:34:22.846 [2024-07-25 01:20:15.925886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.925912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.926081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.926106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.926253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.926279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.926460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.926485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.926607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.926633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.926802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.926827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.926976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.927012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.927159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.927197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.927381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.927413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.927542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.927569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 
00:34:22.846 [2024-07-25 01:20:15.927738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.927763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.927931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.927957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.928104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.928130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.928238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.928271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.928417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.928443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.928574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.928599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.928708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.928734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.928844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.928869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.929034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.929062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 00:34:22.846 [2024-07-25 01:20:15.929210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.846 [2024-07-25 01:20:15.929237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.846 qpair failed and we were unable to recover it. 
00:34:22.846 [2024-07-25 01:20:15.929401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.847 [2024-07-25 01:20:15.929427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.847 qpair failed and we were unable to recover it. 00:34:22.847 [2024-07-25 01:20:15.929575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.847 [2024-07-25 01:20:15.929600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.847 qpair failed and we were unable to recover it. 00:34:22.847 [2024-07-25 01:20:15.929739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.847 [2024-07-25 01:20:15.929764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.847 qpair failed and we were unable to recover it. 00:34:22.847 [2024-07-25 01:20:15.929887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.847 [2024-07-25 01:20:15.929916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.847 qpair failed and we were unable to recover it. 00:34:22.847 [2024-07-25 01:20:15.930130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.847 [2024-07-25 01:20:15.930158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.847 qpair failed and we were unable to recover it. 00:34:22.847 [2024-07-25 01:20:15.930316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.847 [2024-07-25 01:20:15.930345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.847 qpair failed and we were unable to recover it. 00:34:22.847 [2024-07-25 01:20:15.930509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.847 [2024-07-25 01:20:15.930534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:22.847 qpair failed and we were unable to recover it. 00:34:22.847 [2024-07-25 01:20:15.930657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.129 [2024-07-25 01:20:15.930694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.129 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.930845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.930884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.931033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.931073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 
00:34:23.130 [2024-07-25 01:20:15.931269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.931329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.931522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.931558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.931707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.931746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.931929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.931957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.932073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.932099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.932269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.932296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.932418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.932444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.932584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.932609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.932722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.932747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.932871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.932896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 
00:34:23.130 [2024-07-25 01:20:15.933064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.933090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.933229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.933261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.933405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.933430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.933578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.933604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.933717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.933742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.933885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.933910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.934027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.934052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.934196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.934221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.934369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.934399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.934545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.934570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 
00:34:23.130 [2024-07-25 01:20:15.934705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.934730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.934874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.934900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.935043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.935069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.935183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.935208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.935370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.935396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.935519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.935543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.935686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.935711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.935859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.935884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.936024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.936049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 00:34:23.130 [2024-07-25 01:20:15.936201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.130 [2024-07-25 01:20:15.936226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.130 qpair failed and we were unable to recover it. 
00:34:23.130 [2024-07-25 01:20:15.936364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.936389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.936528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.936553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.936695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.936721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.936863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.936888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.937027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.937053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.937171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.937197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.937349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.937375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.937545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.937573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.937786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.937813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.937985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.938013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 
00:34:23.131 [2024-07-25 01:20:15.938211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.938264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.938470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.938498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.938645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.938672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.938845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.938871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.938988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.939014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.939190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.939217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.939395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.939422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.939580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.939618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.939770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.939797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.939918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.939943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 
00:34:23.131 [2024-07-25 01:20:15.940067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.940092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.940220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.940252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.940413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.940439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.940555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.940581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.940755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.940780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.940955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.940980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.941125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.941151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.941278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.941315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.941459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.941484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.941647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.941673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 
00:34:23.131 [2024-07-25 01:20:15.941816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.941841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.941982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.942007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.942150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.942176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.942305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.942331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.942516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.942555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.131 qpair failed and we were unable to recover it. 00:34:23.131 [2024-07-25 01:20:15.942707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.131 [2024-07-25 01:20:15.942733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.942880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.942906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.943049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.943075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.943196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.943223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.943391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.943419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 
00:34:23.132 [2024-07-25 01:20:15.943569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.943595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.943711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.943737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.943880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.943911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.944057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.944084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.944225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.944257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.944397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.944423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.944535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.944560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.944674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.944699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.944823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.944849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.944994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.945020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 
00:34:23.132 [2024-07-25 01:20:15.945187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.945212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.945370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.945396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.945517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.945542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.945710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.945735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.945857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.945883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.946051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.946077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.946197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.946223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.946356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.946385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.946532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.946560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.946708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.946734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 
00:34:23.132 [2024-07-25 01:20:15.946881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.946907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.947085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.947111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.947253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.947280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.947461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.947488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.947629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.947655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.947797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.947824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.947966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.947992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.948113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.948141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.948313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.948339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.132 qpair failed and we were unable to recover it. 00:34:23.132 [2024-07-25 01:20:15.948459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-25 01:20:15.948485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.133 qpair failed and we were unable to recover it. 
00:34:23.133 [2024-07-25 01:20:15.948599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.133 [2024-07-25 01:20:15.948624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.133 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp_qpair_connect_sock: sock connection error) repeats continuously from 01:20:15.948 to 01:20:15.985, alternating between tqpair=0x7fafd4000b90 and tqpair=0x7fafc4000b90, all targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:23.139 [2024-07-25 01:20:15.985042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.139 [2024-07-25 01:20:15.985068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.139 qpair failed and we were unable to recover it.
00:34:23.139 [2024-07-25 01:20:15.985203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.985228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.985416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.985441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.985590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.985620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.985769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.985794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.985910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.985934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.986108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.986134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.986254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.986280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.986418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.986444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.986592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.986617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.986785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.986810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 
00:34:23.139 [2024-07-25 01:20:15.986951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.986977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.987129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.987154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.987323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.987349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.987482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.987507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.987653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.987679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.987815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.987840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.987959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.987985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.988133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.988159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.988314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.988353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.988532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.988560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 
00:34:23.139 [2024-07-25 01:20:15.988707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.988734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.988849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.988875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.988981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.989006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.989125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.989151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.989294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.139 [2024-07-25 01:20:15.989320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.139 qpair failed and we were unable to recover it. 00:34:23.139 [2024-07-25 01:20:15.989490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.989516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.989664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.989690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.989809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.989836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.990012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.990037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.990182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.990208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 
00:34:23.140 [2024-07-25 01:20:15.990396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.990424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.990563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.990588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.990726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.990751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.990922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.990948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.991088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.991113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.991255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.991281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.991421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.991446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.991587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.991612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.991759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.991785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.991926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.991952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 
00:34:23.140 [2024-07-25 01:20:15.992067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.992093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.992246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.992272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.992387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.992417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.992533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.992558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.992734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.992759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.992890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.992915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.993061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.993086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.993210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.993235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.993416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.993441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.993611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.993637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 
00:34:23.140 [2024-07-25 01:20:15.993785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.993811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.993981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.994005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.994121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.994147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.994288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.994314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.994422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.994447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.994563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.994588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.994709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.140 [2024-07-25 01:20:15.994735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.140 qpair failed and we were unable to recover it. 00:34:23.140 [2024-07-25 01:20:15.994887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.994913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.995052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.995077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.995193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.995220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 
00:34:23.141 [2024-07-25 01:20:15.995396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.995422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.995560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.995586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.995768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.995838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.996024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.996052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.996205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.996233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.996423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.996451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.996587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.996613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.996756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.996782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.996929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.996954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.997096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.997122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 
00:34:23.141 [2024-07-25 01:20:15.997255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.997281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.997430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.997455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.997595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.997621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.997759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.997784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.997914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.997941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.998109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.998135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.998279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.998305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.998445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.998472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.998642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.998668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.998780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.998805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 
00:34:23.141 [2024-07-25 01:20:15.998925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.998951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.999120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.999145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.999259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.999288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.999428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.999454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.999570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.999596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.999716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.999742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:15.999865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:15.999890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:16.000006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:16.000031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:16.000174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:16.000210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:16.000366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:16.000401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 
00:34:23.141 [2024-07-25 01:20:16.000547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:16.000573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:16.000685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:16.000711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.141 qpair failed and we were unable to recover it. 00:34:23.141 [2024-07-25 01:20:16.000860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.141 [2024-07-25 01:20:16.000887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.001001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.001028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.001143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.001168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.001310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.001337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.001511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.001537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.001655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.001681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.001803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.001829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.001952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.001977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 
00:34:23.142 [2024-07-25 01:20:16.002119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.002144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.002319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.002345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.002482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.002507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.002623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.002649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.002819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.002845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.002993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.003018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.003155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.003181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.003355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.003381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.003493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.003520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.003692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.003718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 
00:34:23.142 [2024-07-25 01:20:16.003833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.003859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.003976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.004002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.004122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.004147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.004291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.004318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.004434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.004461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.004605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.004631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.004780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.004806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.004942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.004968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.005129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.005158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.005314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.005343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 
00:34:23.142 [2024-07-25 01:20:16.005557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.005585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.005740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.005768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.005951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.005984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.006168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.006196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.006356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.006385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.006544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.006573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.006825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.006877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.007027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.007056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.007257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.142 [2024-07-25 01:20:16.007286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.142 qpair failed and we were unable to recover it. 00:34:23.142 [2024-07-25 01:20:16.007421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.143 [2024-07-25 01:20:16.007447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.143 qpair failed and we were unable to recover it. 
00:34:23.143 [2024-07-25 01:20:16.007569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.143 [2024-07-25 01:20:16.007595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.143 qpair failed and we were unable to recover it.
[... the same three-record error sequence (posix_sock_create connect() failed with errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats ~210 times between 01:20:16.007569 and 01:20:16.045695; duplicate records elided ...]
00:34:23.149 [2024-07-25 01:20:16.045670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.149 [2024-07-25 01:20:16.045695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.149 qpair failed and we were unable to recover it.
00:34:23.149 [2024-07-25 01:20:16.045835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.045860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.045976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.046002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.046128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.046153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.046322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.046348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.046490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.046515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.046631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.046656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.046799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.046824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.046940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.046966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.047087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.047112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.047224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.047259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 
00:34:23.149 [2024-07-25 01:20:16.047403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.047428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.047600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.047626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.047769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.047794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.047938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.047963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.048100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.048126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.048282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.048309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.048427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.048453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.149 qpair failed and we were unable to recover it. 00:34:23.149 [2024-07-25 01:20:16.048599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.149 [2024-07-25 01:20:16.048625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.048790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.048815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.048934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.048960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 
00:34:23.150 [2024-07-25 01:20:16.049091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.049117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.049262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.049288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.049417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.049444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.049630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.049656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.049770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.049795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.049918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.049944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.050059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.050085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.050201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.050227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.050357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.050384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.050503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.050529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 
00:34:23.150 [2024-07-25 01:20:16.050675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.050701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.050816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.050843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.050981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.051006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.051159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.051185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.051309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.051335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.051456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.051482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.051593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.051619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.051768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.051793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.051939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.051964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.052119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.052148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 
00:34:23.150 [2024-07-25 01:20:16.052306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.052331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.052450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.052476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.052588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.052614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.052733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.052759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.052880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.052905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.053021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.053047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.053184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.053210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.053365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.053391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.053501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.053527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.053669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.053698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 
00:34:23.150 [2024-07-25 01:20:16.053843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.053868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.054015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.054041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.054181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.054207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.054327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.054352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.054517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.150 [2024-07-25 01:20:16.054543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.150 qpair failed and we were unable to recover it. 00:34:23.150 [2024-07-25 01:20:16.054709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.054735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.054903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.054928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.055091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.055119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.055295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.055321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.055436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.055461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 
00:34:23.151 [2024-07-25 01:20:16.055581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.055606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.055755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.055781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.055894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.055920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.056054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.056080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.056261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.056286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.056449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.056475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.056623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.056648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.056798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.056824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.056943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.056970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.057092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.057118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 
00:34:23.151 [2024-07-25 01:20:16.057238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.057283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.057436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.057462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.057598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.057624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.057743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.057769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.057904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.057930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.058075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.058101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.058248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.058274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.058405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.058430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.058550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.058575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.058711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.058736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 
00:34:23.151 [2024-07-25 01:20:16.058880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.058905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.059048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.059073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.059188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.059213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.059362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.059388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.059530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.059555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.059672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.059699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.059854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.059879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.060032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.060058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.060238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.060270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.060399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.060429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 
00:34:23.151 [2024-07-25 01:20:16.060584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.060610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.060738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.060764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.151 [2024-07-25 01:20:16.060914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.151 [2024-07-25 01:20:16.060939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.151 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.061081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.061106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.061226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.061271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.061406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.061432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.061541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.061567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.061674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.061699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.061847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.061872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.062019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.062045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 
00:34:23.152 [2024-07-25 01:20:16.062183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.062208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.062352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.062378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.062523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.062549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.062699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.062725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.062868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.062893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.063029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.063057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.063205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.063233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.063403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.063429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.063574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.063599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.063738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.063763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 
00:34:23.152 [2024-07-25 01:20:16.063900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.063928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.064087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.064129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.064300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.064326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.064473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.064499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.064644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.064670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.064793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.064818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.064963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.064989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.065132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.065157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.065339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.065365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.065515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.065540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 
00:34:23.152 [2024-07-25 01:20:16.065681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.065707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.065877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.065902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.066041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.066066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.066179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.066205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.066355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.066381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.066522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.066547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.066658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.066684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.066801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.066828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.066974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.067000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 00:34:23.152 [2024-07-25 01:20:16.067118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.152 [2024-07-25 01:20:16.067149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.152 qpair failed and we were unable to recover it. 
00:34:23.152 [2024-07-25 01:20:16.067264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.067291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.067467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.067492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.067663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.067688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.067802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.067828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.067945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.067971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.068088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.068115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.068264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.068293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.068445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.068471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.068593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.068620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 00:34:23.153 [2024-07-25 01:20:16.068800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.153 [2024-07-25 01:20:16.068825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.153 qpair failed and we were unable to recover it. 
00:34:23.158 [2024-07-25 01:20:16.103737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.158 [2024-07-25 01:20:16.103763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.158 qpair failed and we were unable to recover it. 00:34:23.158 [2024-07-25 01:20:16.103959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.158 [2024-07-25 01:20:16.103989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.158 qpair failed and we were unable to recover it. 00:34:23.158 [2024-07-25 01:20:16.104166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.158 [2024-07-25 01:20:16.104195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.158 qpair failed and we were unable to recover it. 00:34:23.158 [2024-07-25 01:20:16.104361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.158 [2024-07-25 01:20:16.104391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.158 qpair failed and we were unable to recover it. 00:34:23.158 [2024-07-25 01:20:16.104542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.158 [2024-07-25 01:20:16.104570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.158 qpair failed and we were unable to recover it. 00:34:23.158 [2024-07-25 01:20:16.104724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.158 [2024-07-25 01:20:16.104753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.158 qpair failed and we were unable to recover it. 00:34:23.158 [2024-07-25 01:20:16.104890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.158 [2024-07-25 01:20:16.104916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.158 qpair failed and we were unable to recover it. 00:34:23.158 [2024-07-25 01:20:16.105090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.158 [2024-07-25 01:20:16.105115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.158 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.105306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.105335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.105610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.105671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 
00:34:23.159 [2024-07-25 01:20:16.105846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.105874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.106032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.106062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.106250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.106292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.106437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.106463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.106574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.106600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.106771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.106797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.106927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.106953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.107117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.107145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.107299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.107324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.107468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.107494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 
00:34:23.159 [2024-07-25 01:20:16.107604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.107629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.107775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.107800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.107906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.107931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.108098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.108126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.108253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.108278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.108423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.108449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.108590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.108616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.108762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.108788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.108931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.108957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.109123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.109148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 
00:34:23.159 [2024-07-25 01:20:16.109292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.109318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.109462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.109488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.109623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.109648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.109831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.109856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.109999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.110023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.110186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.110214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.110356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.110382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.110577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.110604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.110785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.110813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.111002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.111030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 
00:34:23.159 [2024-07-25 01:20:16.111187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.111216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.159 qpair failed and we were unable to recover it. 00:34:23.159 [2024-07-25 01:20:16.111412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.159 [2024-07-25 01:20:16.111441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.111653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.111681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.111910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.111966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.112166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.112194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.112378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.112407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.112570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.112598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.112866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.112919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.113103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.113130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.113318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.113344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 
00:34:23.160 [2024-07-25 01:20:16.113492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.113517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.113625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.113650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.113766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.113792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.113935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.113962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.114081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.114107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.114252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.114278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.114422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.114451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.114592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.114617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.114731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.114756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.114905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.114930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 
00:34:23.160 [2024-07-25 01:20:16.115096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.115124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.115285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.115311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.115451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.115475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.115597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.115622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.115738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.115763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.115889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.115914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.116023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.116065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.116220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.116251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.116376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.116402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.116521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.116547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 
00:34:23.160 [2024-07-25 01:20:16.116667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.116692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.116813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.116838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.116950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.116975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.117119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.117144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.117268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.117294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.117445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.117471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.117581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.117606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.117724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.117751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.117896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.160 [2024-07-25 01:20:16.117922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.160 qpair failed and we were unable to recover it. 00:34:23.160 [2024-07-25 01:20:16.118102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.118128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 
00:34:23.161 [2024-07-25 01:20:16.118288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.118314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.118449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.118474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.118593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.118619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.118769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.118796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.118961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.118986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.119157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.119182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.119319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.119344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.119519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.119544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.119688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.119717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.119929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.119957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 
00:34:23.161 [2024-07-25 01:20:16.120121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.120146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.120285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.120313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.120487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.120513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.120678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.120703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.120870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.120895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.121042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.121067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.121247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.121277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.121427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.121452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.121600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.121625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.121791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.121817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 
00:34:23.161 [2024-07-25 01:20:16.121958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.121984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.122097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.122122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.122279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.122305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.122450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.122475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.122617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.122642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.122783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.122807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.122978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.123003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.123125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.123151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.123297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.123323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.123463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.123488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 
00:34:23.161 [2024-07-25 01:20:16.123625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.123650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.123813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.123842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.124048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.124076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.124214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.124240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.124415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.124440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.124590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.124617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.124762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.161 [2024-07-25 01:20:16.124787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.161 qpair failed and we were unable to recover it. 00:34:23.161 [2024-07-25 01:20:16.124945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.124970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.125139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.125164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.125288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.125314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 
00:34:23.162 [2024-07-25 01:20:16.125458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.125483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.125663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.125689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.125845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.125870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.125990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.126015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.126173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.126202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.126344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.126370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.126480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.126505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.126649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.126675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.126842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.126867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 00:34:23.162 [2024-07-25 01:20:16.127028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.162 [2024-07-25 01:20:16.127056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.162 qpair failed and we were unable to recover it. 
00:34:23.162 [2024-07-25 01:20:16.127286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.162 [2024-07-25 01:20:16.127313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.162 qpair failed and we were unable to recover it.
[... the identical three-record sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 01:20:16.127286 through 01:20:16.162720 with no successful connection ...]
00:34:23.168 [2024-07-25 01:20:16.162694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.168 [2024-07-25 01:20:16.162720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.168 qpair failed and we were unable to recover it.
00:34:23.168 [2024-07-25 01:20:16.162864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.162890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.163067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.163095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.163252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.163278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.163417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.163442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.163558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.163584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.163754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.163779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.163895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.163920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.164097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.164123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.164300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.164326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.164468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.164494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 
00:34:23.168 [2024-07-25 01:20:16.164610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.164636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.164758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.164783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.164889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.164914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.165038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.165063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.165206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.165231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.165370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.165397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.165512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.165537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.165653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.165678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.165818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.165843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.165990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.166016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 
00:34:23.168 [2024-07-25 01:20:16.166139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.166164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.166320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.166346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.166514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.166539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.166685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.166711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.166822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.166846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.166990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.167015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.167160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.167187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.167336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.167362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.167507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.167533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 00:34:23.168 [2024-07-25 01:20:16.167690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.168 [2024-07-25 01:20:16.167718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.168 qpair failed and we were unable to recover it. 
00:34:23.169 [2024-07-25 01:20:16.167871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.167896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.168041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.168070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.168277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.168306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.168469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.168495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.168638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.168663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.168834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.168860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.168976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.169007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.169130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.169157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.169272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.169297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.169447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.169472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 
00:34:23.169 [2024-07-25 01:20:16.169610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.169635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.169783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.169808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.169958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.169983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.170121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.170146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.170268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.170294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.170465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.170491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.170626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.170651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.170763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.170789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.170957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.170983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.171110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.171135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 
00:34:23.169 [2024-07-25 01:20:16.171256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.171282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.171422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.171447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.171613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.171638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.171782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.171807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.171916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.171942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.172117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.172142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.172258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.172284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.172423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.172448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.172567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.172592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.172761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.172786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 
00:34:23.169 [2024-07-25 01:20:16.172925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.172950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.173094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.173119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.173273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.173300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.173448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.173474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.173617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.173644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.173767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.173794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.173965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.173990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.169 [2024-07-25 01:20:16.174132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.169 [2024-07-25 01:20:16.174158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.169 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.174301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.174327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.174477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.174502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 
00:34:23.170 [2024-07-25 01:20:16.174646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.174671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.174788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.174815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.174932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.174957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.175096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.175121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.175270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.175297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.175444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.175470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.175615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.175645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.175810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.175835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.175957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.175982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.176129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.176156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 
00:34:23.170 [2024-07-25 01:20:16.176327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.176353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.176498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.176523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.176661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.176687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.176860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.176889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.177077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.177102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.177254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.177280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.177450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.177475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.177623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.177649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.177794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.177819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.177968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.177993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 
00:34:23.170 [2024-07-25 01:20:16.178109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.178136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.178295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.178322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.178462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.178487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.178634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.178659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.178805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.178831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.178991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.179019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.179221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.179256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.179390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.179415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.179567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.179592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.179737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.179763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 
00:34:23.170 [2024-07-25 01:20:16.179913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.179938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.180077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.180103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.180220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.180250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.180401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.180427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.180548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.180573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.170 qpair failed and we were unable to recover it. 00:34:23.170 [2024-07-25 01:20:16.180719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.170 [2024-07-25 01:20:16.180744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.180912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.180937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.181087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.181113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.181228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.181265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.181421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.181447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 
00:34:23.171 [2024-07-25 01:20:16.181567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.181593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.181762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.181788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.181926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.181951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.182060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.182085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.182201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.182226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.182411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.182436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.182559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.182589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.182702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.182728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.182867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.182893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.183061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.183090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 
00:34:23.171 [2024-07-25 01:20:16.183253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.183299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.183468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.183493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.183640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.183666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.183840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.183865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.184003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.184029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.184149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.184175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.184343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.184369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.184492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.184517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.184696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.184721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 00:34:23.171 [2024-07-25 01:20:16.184837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.171 [2024-07-25 01:20:16.184862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.171 qpair failed and we were unable to recover it. 
00:34:23.171 [2024-07-25 01:20:16.184986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.171 [2024-07-25 01:20:16.185012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.171 qpair failed and we were unable to recover it.
00:34:23.171 [... the identical three-record sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 01:20:16.184986 through 01:20:16.221944 ...]
00:34:23.177 [2024-07-25 01:20:16.222061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.222086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.222199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.222226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.222414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.222442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.222631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.222662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.222778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.222820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.222979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.223007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.223125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.223154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.223312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.223338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.223448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.223473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.223635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.223663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 
00:34:23.177 [2024-07-25 01:20:16.223813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.223842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.223974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.223999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.224116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.224141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.224313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.224341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.224460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.224488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.224681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.224706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.224833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.224861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.225024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.225050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.225218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.225249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.225406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.225431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 
00:34:23.177 [2024-07-25 01:20:16.225619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.225647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.225832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.225880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.226021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.226048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.226203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.226228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.226376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.226401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.226520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.226547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.226678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.177 [2024-07-25 01:20:16.226706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.177 qpair failed and we were unable to recover it. 00:34:23.177 [2024-07-25 01:20:16.226874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.226899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.227014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.227040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.227178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.227203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 
00:34:23.178 [2024-07-25 01:20:16.227392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.227417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.227535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.227560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.227704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.227729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.227854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.227880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.228046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.228074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.228203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.228229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.228394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.228436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.228576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.228605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.228762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.228791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.228977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.229003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 
00:34:23.178 [2024-07-25 01:20:16.229166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.229194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.229382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.229408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.229524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.229550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.229664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.229693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.229870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.229895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.230041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.230069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.230190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.230218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.230360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.230385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.230527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.230553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.230663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.230688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 
00:34:23.178 [2024-07-25 01:20:16.230878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.230906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.231096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.231122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.231285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.231314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.231477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.178 [2024-07-25 01:20:16.231506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.178 qpair failed and we were unable to recover it. 00:34:23.178 [2024-07-25 01:20:16.231664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.231691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.231819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.231844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.231989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.232015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.232165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.232191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.232335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.232376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.232515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.232542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 
00:34:23.179 [2024-07-25 01:20:16.232702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.232731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.232946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.232993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.233149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.233176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.233347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.233372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.233497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.233526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.233683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.233712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.233842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.233870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.234024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.234050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.234217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.234248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.234379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.234404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 
00:34:23.179 [2024-07-25 01:20:16.234604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.234632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.234791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.234816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.234962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.235003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.235191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.235219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.235394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.235420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.235561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.235587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.235753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.235781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.235907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.235935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.236093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.236122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.236298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.236334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 
00:34:23.179 [2024-07-25 01:20:16.236492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.236517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.236660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.236685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.236852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.236881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.237073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.237102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.237236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.237306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.237461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.237489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.237647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.237675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.237840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.237865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.238013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.238054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.238184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.238212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 
00:34:23.179 [2024-07-25 01:20:16.238392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.179 [2024-07-25 01:20:16.238418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.179 qpair failed and we were unable to recover it. 00:34:23.179 [2024-07-25 01:20:16.238561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.238587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.238773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.238801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.238949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.238978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.239128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.239156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.239312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.239337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.239457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.239482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.239637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.239665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.239849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.239877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.239995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.240035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 
00:34:23.180 [2024-07-25 01:20:16.240143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.240168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.240341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.240371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.240532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.240559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.240718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.240743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.240860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.240885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.241001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.241026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.241219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.241271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.241450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.241475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.241642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.241670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.241790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.241818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 
00:34:23.180 [2024-07-25 01:20:16.241990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.242016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.242158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.242184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.242300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.242327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.242476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.242504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.242678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.242703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.242844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.242870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.243009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.243034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.243156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.243182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.243323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.243352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.243513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.243539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 
00:34:23.180 [2024-07-25 01:20:16.243680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.243708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.243847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.243873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.244037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.244064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.244232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.244271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.244464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.244492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.244705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.244733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.244913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.244941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.245098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.245123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.180 [2024-07-25 01:20:16.245254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.180 [2024-07-25 01:20:16.245280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.180 qpair failed and we were unable to recover it. 00:34:23.181 [2024-07-25 01:20:16.245420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.181 [2024-07-25 01:20:16.245445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.181 qpair failed and we were unable to recover it. 
00:34:23.181 [2024-07-25 01:20:16.245583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.181 [2024-07-25 01:20:16.245612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.181 qpair failed and we were unable to recover it.
[log condensed: the same failure triplet (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error for tqpair=0x7fafd4000b90 at addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with advancing timestamps from 01:20:16.245782 through 01:20:16.256269.]
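On Linux, errno = 111 is ECONNREFUSED: the connect() SYN was answered with a TCP RST because nothing was listening on 10.0.0.2 port 4420 (4420 is the IANA-assigned NVMe-oF port) at that moment. Below is a minimal standalone sketch, assuming plain BSD sockets rather than SPDK's sock layer, that reproduces the same errno when no listener is up on the target port; the address and port mirror the log but the program itself is illustrative, not SPDK code.

/* econnrefused_demo.c: dial the target from the log; with no NVMe/TCP
 * listener on 10.0.0.2:4420 this typically prints
 * "connect() failed, errno = 111 (Connection refused)" on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Mirrors the posix_sock_create error line in the log above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}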
00:34:23.466 [2024-07-25 01:20:16.256501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.466 [2024-07-25 01:20:16.256540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.466 qpair failed and we were unable to recover it.
[log condensed: the identical failure triplet repeats for tqpair=0x1f9f840 from 01:20:16.256695 through 01:20:16.260197, then switches back to tqpair=0x7fafd4000b90 from 01:20:16.260343 through 01:20:16.282541; every attempt fails with connect() errno = 111 against addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."]
00:34:23.470 [2024-07-25 01:20:16.282657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.282682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.282803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.282828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.282974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.283000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.283148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.283173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.283315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.283341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.283455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.283481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.283597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.283622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.283781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.283806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.283921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.283947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.284070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.284097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 
00:34:23.470 [2024-07-25 01:20:16.284265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.284290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.284412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.284436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.284550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.284575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.284691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.284715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.284885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.284910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.470 [2024-07-25 01:20:16.285058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.470 [2024-07-25 01:20:16.285084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.470 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.285198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.285224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.285374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.285400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.285520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.285544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.285701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.285725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 
00:34:23.471 [2024-07-25 01:20:16.285866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.285891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.286021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.286045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.286173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.286198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.286332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.286358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.286506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.286531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.286677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.286702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.286875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.286900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.287012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.287037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.287181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.287206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.287366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.287392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 
00:34:23.471 [2024-07-25 01:20:16.287500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.287525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.287648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.287674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.287793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.287818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.287944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.287969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.288136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.288160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.288313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.288339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.288461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.288486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.288617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.288646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.288790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.288814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.288984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.289010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 
00:34:23.471 [2024-07-25 01:20:16.289137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.289163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.289303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.289329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.289474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.289498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.289618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.289644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.471 [2024-07-25 01:20:16.289757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.471 [2024-07-25 01:20:16.289782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.471 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.289932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.289956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.290096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.290120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.290262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.290287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.290433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.290459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.290603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.290628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 
00:34:23.472 [2024-07-25 01:20:16.290769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.290794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.290944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.290969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.291112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.291137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.291282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.291308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.291445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.291469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.291587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.291613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.291735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.291759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.291895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.291920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.292047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.292072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.292252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.292277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 
00:34:23.472 [2024-07-25 01:20:16.292442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.292466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.292579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.292604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.292746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.292770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.292892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.292918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.293051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.293077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.293221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.293259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.293407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.293433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.293577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.293603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.293742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.293767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.293879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.293904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 
00:34:23.472 [2024-07-25 01:20:16.294038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.294062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.294200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.294225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.294347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.294372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.294518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.294542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.294702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.294726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.294873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.294898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.295050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.472 [2024-07-25 01:20:16.295075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.472 qpair failed and we were unable to recover it. 00:34:23.472 [2024-07-25 01:20:16.295251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.295284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.295432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.295457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.295602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.295627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 
00:34:23.473 [2024-07-25 01:20:16.295774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.295799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.295920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.295947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.296063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.296088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.296228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.296261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.296410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.296435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.296553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.296578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.296710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.296734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.296866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.296891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.297015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.297041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.297192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.297216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 
00:34:23.473 [2024-07-25 01:20:16.297374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.297401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.297520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.297546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.297700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.297725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.297871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.297896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.298035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.298061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.298174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.298199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.298330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.298356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.298503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.298528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.298645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.298670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.298797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.298822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 
00:34:23.473 [2024-07-25 01:20:16.298940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.298965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.299108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.299135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.299277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.299303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.299447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.299473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.299624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.299649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.299767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.299792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.299936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.299961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.300112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.300137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.300255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.300281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.300447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.300472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 
00:34:23.473 [2024-07-25 01:20:16.300616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.300641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.300771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.300796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.300921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.300946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.301097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.301122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.473 qpair failed and we were unable to recover it. 00:34:23.473 [2024-07-25 01:20:16.301240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.473 [2024-07-25 01:20:16.301278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.301402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.301428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.301565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.301590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.301707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.301737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.301888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.301912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.302054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.302080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 
00:34:23.474 [2024-07-25 01:20:16.302221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.302251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.302394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.302419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.302577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.302601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.302751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.302775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.302895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.302920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.303060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.303086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.303227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.303258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.303407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.303433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.303556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.303581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.303728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.303752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 
00:34:23.474 [2024-07-25 01:20:16.303893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.303920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.304046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.304071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.304189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.304213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.304392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.304418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.304537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.304562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.304708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.304732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.304880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.304905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.305074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.305099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.305251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.305277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 00:34:23.474 [2024-07-25 01:20:16.305422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.474 [2024-07-25 01:20:16.305447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.474 qpair failed and we were unable to recover it. 
00:34:23.480 [2024-07-25 01:20:16.338335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.338361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.338505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.338529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.338676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.338702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.338869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.338895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.339040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.339065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.339214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.339239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.339364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.339389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.339531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.339555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.339702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.339727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.339864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.339889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 
00:34:23.480 [2024-07-25 01:20:16.340004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.340030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.340152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.340178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.340338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.340364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.340487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.340512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.340628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.340652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.340772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.340798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.340970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.340995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.341114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.341145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.341300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.341326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.341452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.341477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 
00:34:23.480 [2024-07-25 01:20:16.341595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.341620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.341750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.341775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.341889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.341914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.342031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.342057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.342181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.480 [2024-07-25 01:20:16.342206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.480 qpair failed and we were unable to recover it. 00:34:23.480 [2024-07-25 01:20:16.342337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.342364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.342509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.342533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.342679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.342704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.342818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.342843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.342997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.343022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 
00:34:23.481 [2024-07-25 01:20:16.343166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.343191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.343353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.343379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.343525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.343550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.343667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.343692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.343835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.343860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.343995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.344020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.344192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.344218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.344371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.344397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.344539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.344564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.344720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.344744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 
00:34:23.481 [2024-07-25 01:20:16.344897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.344922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.345080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.345105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.345256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.345282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.345454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.345479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.345627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.345654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.345780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.345805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.345937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.345962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.346137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.346162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.346282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.346308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.346428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.346453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 
00:34:23.481 [2024-07-25 01:20:16.346573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.346599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.346716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.346741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.346907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.346931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.347052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.347075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.347216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.347239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.347401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.347425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.347548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.347572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.347677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.347705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.347842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.347866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.348011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.348034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 
00:34:23.481 [2024-07-25 01:20:16.348174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.348198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.481 [2024-07-25 01:20:16.348341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.481 [2024-07-25 01:20:16.348367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.481 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.348535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.348559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.348691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.348715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.348884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.348909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.349036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.349059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.349200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.349224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.349362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.349386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.349511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.349534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.349677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.349701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 
00:34:23.482 [2024-07-25 01:20:16.349829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.349854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.350030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.350056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.350226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.350258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.350382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.350408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.350550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.350575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.350768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.350793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.350908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.350933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.351152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.351177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.351324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.351349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.351491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.351517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 
00:34:23.482 [2024-07-25 01:20:16.351664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.351691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.351816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.351840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.351979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.352005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.352149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.352174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.352300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.352325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.352486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.352511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.352656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.352681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.352821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.352846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.352961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.352987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.353107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.353131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 
00:34:23.482 [2024-07-25 01:20:16.353280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.353305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.353423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.353450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.353591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.353616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.353730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.353755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.353903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.353928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.354069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.354094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.354211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.354236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.354403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.354432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.354580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.482 [2024-07-25 01:20:16.354604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.482 qpair failed and we were unable to recover it. 00:34:23.482 [2024-07-25 01:20:16.354726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.354752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 
00:34:23.483 [2024-07-25 01:20:16.354869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.354895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.355012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.355037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.355150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.355174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.355299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.355325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.355467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.355492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.355636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.355660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.355774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.355800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.355942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.355966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.356080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.356105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.356269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.356295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 
00:34:23.483 [2024-07-25 01:20:16.356413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.356437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.356583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.356608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.356746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.356770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.356905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.356930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.357079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.357105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.357251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.357277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.357395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.357419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.357544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.357569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.357714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.357738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.357878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.357904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 
00:34:23.483 [2024-07-25 01:20:16.358076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.358101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.358262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.358288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.358408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.358433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.358571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.358596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.358768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.358797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.358939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.358965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.359085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.359109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.359232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.359265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.359405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.359430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 00:34:23.483 [2024-07-25 01:20:16.359541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.359565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it. 
00:34:23.483 [2024-07-25 01:20:16.359711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.483 [2024-07-25 01:20:16.359737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.483 qpair failed and we were unable to recover it.
00:34:23.486 [... same connect() (errno = 111) / qpair-connect error pair repeated for tqpair=0x7fafd4000b90 ~100 more times, 2024-07-25 01:20:16.359848 through 01:20:16.376219 ...]
00:34:23.486 [2024-07-25 01:20:16.376381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.486 [2024-07-25 01:20:16.376431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.486 qpair failed and we were unable to recover it.
00:34:23.487 [... same error pair repeated for tqpair=0x1f9f840 ~15 more times, 2024-07-25 01:20:16.376578 through 01:20:16.379068 ...]
00:34:23.487 [2024-07-25 01:20:16.379199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.487 [2024-07-25 01:20:16.379239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.487 qpair failed and we were unable to recover it.
00:34:23.489 [... same error pair repeated for tqpair=0x7fafd4000b90 ~89 more times, 2024-07-25 01:20:16.379370 through 01:20:16.393785 ...]
00:34:23.489 [2024-07-25 01:20:16.393900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.489 [2024-07-25 01:20:16.393926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.489 qpair failed and we were unable to recover it. 00:34:23.489 [2024-07-25 01:20:16.394047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.489 [2024-07-25 01:20:16.394072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.489 qpair failed and we were unable to recover it. 00:34:23.489 [2024-07-25 01:20:16.394218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.489 [2024-07-25 01:20:16.394260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.489 qpair failed and we were unable to recover it. 00:34:23.489 [2024-07-25 01:20:16.394410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.489 [2024-07-25 01:20:16.394435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.489 qpair failed and we were unable to recover it. 00:34:23.489 [2024-07-25 01:20:16.394602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.489 [2024-07-25 01:20:16.394628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.489 qpair failed and we were unable to recover it. 00:34:23.489 [2024-07-25 01:20:16.394769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.489 [2024-07-25 01:20:16.394794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.489 qpair failed and we were unable to recover it. 00:34:23.489 [2024-07-25 01:20:16.394970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.489 [2024-07-25 01:20:16.394995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.395134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.395160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.395290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.395316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.395431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.395456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 
00:34:23.490 [2024-07-25 01:20:16.395568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.395593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.395709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.395742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.395858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.395883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.396030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.396054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.396281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.396307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.396454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.396479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.396588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.396613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.396730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.396756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.396871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.396896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.397063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.397088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 
00:34:23.490 [2024-07-25 01:20:16.397231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.397263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.397382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.397408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.397588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.397613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.397756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.397781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.397926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.397951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.398098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.398124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.398272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.398298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.398414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.398439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.398550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.398575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.398723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.398749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 
00:34:23.490 [2024-07-25 01:20:16.398866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.398891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.399008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.399033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.399140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.399165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.399310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.399336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.399476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.399501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.399646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.399671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.399889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.490 [2024-07-25 01:20:16.399914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.490 qpair failed and we were unable to recover it. 00:34:23.490 [2024-07-25 01:20:16.400062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.400087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.400261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.400287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.400403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.400429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 
00:34:23.491 [2024-07-25 01:20:16.400572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.400597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.400740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.400765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.400941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.400966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.401079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.401104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.401259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.401285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.401407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.401433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.401575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.401601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.401719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.401745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.401863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.401888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.402010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.402036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 
00:34:23.491 [2024-07-25 01:20:16.402206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.402231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.402380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.402410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.402551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.402576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.402705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.402730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.402873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.402898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.403035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.403060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.403178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.403203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.403324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.403350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.403487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.403512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.403658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.403684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 
00:34:23.491 [2024-07-25 01:20:16.403805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.403831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.403953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.403978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.404128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.404154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.404312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.404339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.404454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.404480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.404627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.404653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.404793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.404818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.404928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.404953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.405092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.405117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.405261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.405287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 
00:34:23.491 [2024-07-25 01:20:16.405427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.405452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.405570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.405595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.405716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.405741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.405878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.405903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.491 [2024-07-25 01:20:16.406012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.491 [2024-07-25 01:20:16.406038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.491 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.406184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.406209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.406332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.406357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.406499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.406525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.406642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.406668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.406783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.406808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 
00:34:23.492 [2024-07-25 01:20:16.406964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.406989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.407111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.407136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.407275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.407300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.407447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.407472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.407589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.407614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.407779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.407805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.407924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.407950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.408093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.408118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.408262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.408288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.408432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.408457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 
00:34:23.492 [2024-07-25 01:20:16.408602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.408627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.408768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.408798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.408945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.408970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.409134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.409159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.409329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.409355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.409498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.409523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.409661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.409685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.409854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.409879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.410025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.410050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.410162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.410187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 
00:34:23.492 [2024-07-25 01:20:16.410309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.410336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.410482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.410507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.410628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.410653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.410797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.410823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.410928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.410954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.411069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.411094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.411239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.411271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.411442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.411468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.411614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.411640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.411807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.411833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 
00:34:23.492 [2024-07-25 01:20:16.411980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.412005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.412174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.492 [2024-07-25 01:20:16.412200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.492 qpair failed and we were unable to recover it. 00:34:23.492 [2024-07-25 01:20:16.412383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.412409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.412555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.412580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.412728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.412754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.412921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.412946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.413086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.413111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.413257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.413283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.413397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.413422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.413568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.413594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 
00:34:23.493 [2024-07-25 01:20:16.413750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.413776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.413918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.413943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.414085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.414111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.414276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.414302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.414458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.414483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.414595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.414620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.414787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.414812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.414955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.414980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.415144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.415169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 00:34:23.493 [2024-07-25 01:20:16.415287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.493 [2024-07-25 01:20:16.415312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.493 qpair failed and we were unable to recover it. 
00:34:23.493 [2024-07-25 01:20:16.415430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.493 [2024-07-25 01:20:16.415457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.493 qpair failed and we were unable to recover it.
00:34:23.493-00:34:23.499 [2024-07-25 01:20:16.415605 - 01:20:16.450827] (the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fafd4000b90, addr=10.0.0.2, port=4420 repeats roughly 200 more times; every attempt ends with "qpair failed and we were unable to recover it.")
00:34:23.499 [2024-07-25 01:20:16.450827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.499 [2024-07-25 01:20:16.450852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.499 qpair failed and we were unable to recover it.
00:34:23.499 [2024-07-25 01:20:16.451023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.451049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.451195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.451221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.451371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.451398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.451508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.451533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.451675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.451701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.451839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.451865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.452000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.452025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.452140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.452165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.452307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.452333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.452447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.452473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 
00:34:23.499 [2024-07-25 01:20:16.452617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.452642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.452783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.452809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.452949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.452975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.453114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.453140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.453266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.453293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.453414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.453440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.453609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.453635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.453773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.453799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.453916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.453942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.454063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.454089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 
00:34:23.499 [2024-07-25 01:20:16.454204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.454230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.499 [2024-07-25 01:20:16.454382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.499 [2024-07-25 01:20:16.454408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.499 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.454565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.454590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.454732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.454758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.454904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.454929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.455042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.455067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.455216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.455247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.455383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.455409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.455553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.455579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.455801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.455826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 
00:34:23.500 [2024-07-25 01:20:16.455966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.455992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.456112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.456141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.456256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.456281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.456429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.456454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.456618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.456643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.456811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.456836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.456981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.457007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.457131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.457157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.457304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.457329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.457472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.457498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 
00:34:23.500 [2024-07-25 01:20:16.457665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.457691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.457803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.457829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.457996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.458021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.458162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.458188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.458330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.458355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.458503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.458528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.458665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.458690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.458826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.458852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.458980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.459006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.459163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.459188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 
00:34:23.500 [2024-07-25 01:20:16.459331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.459356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.459524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.459549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.459691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.459716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.459831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.459856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.460034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.460059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.500 [2024-07-25 01:20:16.460172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.500 [2024-07-25 01:20:16.460197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.500 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.460348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.460374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.460516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.460541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.460706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.460735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.460906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.460931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 
00:34:23.501 [2024-07-25 01:20:16.461080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.461105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.461255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.461281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.461409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.461435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.461588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.461613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.461770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.461794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.461907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.461932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.462072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.462097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.462259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.462286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.462458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.462483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 00:34:23.501 [2024-07-25 01:20:16.462594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.501 [2024-07-25 01:20:16.462619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.501 qpair failed and we were unable to recover it. 
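For reference: on Linux, errno 111 is ECONNREFUSED, which connect() returns when the peer actively rejects the connection because nothing is listening on the destination port. That matches the state here, since the target application serving 10.0.0.2:4420 has just been killed by the disconnect test. A minimal standalone C sketch (an illustration only, not SPDK's posix.c; it assumes a reachable host with no listener on the port) reproduces the same errno:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Address and port taken from the log above (4420 is the NVMe/TCP default). */
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With the target process down but the host reachable, connect() is
         * actively refused and errno is set to ECONNREFUSED (111 on Linux). */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }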
00:34:23.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3927568 Killed "${NVMF_APP[@]}" "$@"
[... the connect() failed, errno = 111 / sock connection error / qpair failed triple continues to repeat from 01:20:16.462 through 01:20:16.478, interleaved with the following test-script trace ...]
00:34:23.501 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:23.501 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:23.501 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:23.501 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:23.501 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3928120
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3928120
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3928120 ']'
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:23.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:23.502 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.504 [2024-07-25 01:20:16.478316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.478342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.478485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.478510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.478621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.478647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.478759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.478784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.478888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.478913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.479054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.479079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.479227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.479260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.479380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.479405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.479549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.479580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.479722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.479747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 
00:34:23.504 [2024-07-25 01:20:16.479891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.479916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.480061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.480086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.480256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.480282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.480434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.480461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.480598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.480625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.480772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.480797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.480940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.480966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.481079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.481105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.481250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.481276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.481411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.481437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 
00:34:23.504 [2024-07-25 01:20:16.481577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.481603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.481736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.481762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.481923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.481948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.482114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.482140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.482266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.482292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.482407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.482433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.482553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.482578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.482715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.482740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.482879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.482904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 00:34:23.504 [2024-07-25 01:20:16.483074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.504 [2024-07-25 01:20:16.483099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.504 qpair failed and we were unable to recover it. 
00:34:23.504 [2024-07-25 01:20:16.483253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.483279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.483399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.483424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.483564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.483590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.483712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.483737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.483845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.483870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.483995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.484020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.484157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.484182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.484302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.484328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.484471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.484497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.484653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.484678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 
00:34:23.505 [2024-07-25 01:20:16.484797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.484824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.484967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.484992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.485160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.485185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.485334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.485360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.485477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.485502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.485664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.485689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.485857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.485882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.486022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.486048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.486152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.486181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.486308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.486334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 
00:34:23.505 [2024-07-25 01:20:16.486453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.486480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.486600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.486626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.486763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.486788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.486924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.486950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.487058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.487083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.487227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.487259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.487399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.487425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.487568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.487593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.487733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.487759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.487926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.487951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 
00:34:23.505 [2024-07-25 01:20:16.488064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.488089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.488232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.488266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.488416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.488442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.488557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.488582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.488749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.488774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.488897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.488923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.489044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.489070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.505 [2024-07-25 01:20:16.489211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.505 [2024-07-25 01:20:16.489237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.505 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.489386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.489411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.489546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.489572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 
00:34:23.506 [2024-07-25 01:20:16.489718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.489744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.489886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.489911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.490054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.490079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.490216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.490257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.490407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.490433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.490583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.490608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.490751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.490776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.490916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.490941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.491095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.491120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.491274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.491300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 
00:34:23.506 [2024-07-25 01:20:16.491453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.491478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.491620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.491645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.491766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.491792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.491961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.491986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.492124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.492149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.492320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.492345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.492517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.492542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.492690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.492716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.492860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.492888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.492995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.493021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 
00:34:23.506 [2024-07-25 01:20:16.493193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.493218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.493352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.493378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.493488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.493514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.493661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.493686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.493829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.493854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.493972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.493997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.494117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.494142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.494277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.494302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.494442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.494467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.494608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.494633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 
00:34:23.506 [2024-07-25 01:20:16.494746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.494771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.494913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.494938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.495114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.495139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.495266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.495292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.495436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.506 [2024-07-25 01:20:16.495462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.506 qpair failed and we were unable to recover it. 00:34:23.506 [2024-07-25 01:20:16.495576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.495601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.495769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.495795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.495935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.495961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.496110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.496136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.496307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.496333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 
00:34:23.507 [2024-07-25 01:20:16.496488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.496513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.496632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.496658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.496798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.496823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.496992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.497018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.497157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.497182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.497309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.497335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.497491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.497516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.497689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.497714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.497853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.497878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.498043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.498069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 
00:34:23.507 [2024-07-25 01:20:16.498209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.498236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.498405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.498430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.498545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.498572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.498695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.498720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.498832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.498857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.499000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.499027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.499130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.499155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.499310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.499337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.499479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.499510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 00:34:23.507 [2024-07-25 01:20:16.499636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.507 [2024-07-25 01:20:16.499661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.507 qpair failed and we were unable to recover it. 
00:34:23.507 [2024-07-25 01:20:16.499818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.507 [2024-07-25 01:20:16.499844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.507 qpair failed and we were unable to recover it.
00:34:23.507 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 2024-07-25 01:20:16.500008 through 01:20:16.516312 ...]
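For context: errno = 111 on Linux is ECONNREFUSED ("Connection refused"), which connect() returns when the remote kernel answers the SYN with an RST because nothing is listening on the port — here the NVMe/TCP target at 10.0.0.2:4420 that the test has presumably torn down or not yet restarted. Below is a minimal stand-alone sketch, not SPDK code; the address and port are copied from the log, and the stated outcome assumes a reachable host with no listener on that port:

/*
 * Minimal sketch, not SPDK code: reproduce the "connect() failed, errno = 111"
 * pattern above with a plain POSIX socket. Assumes the target host is
 * reachable but has no listener on the port, so the remote kernel answers
 * the SYN with an RST and connect() fails with ECONNREFUSED (111 on Linux).
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

The host side keeps re-creating the socket and re-attempting the connection, which is consistent with the triplet above repeating until the target comes back.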
00:34:23.510 [... the connect()/qpair-failure triplet continues for tqpair=0x7fafd4000b90 from 2024-07-25 01:20:16.516453 through 01:20:16.517322 ...]
00:34:23.510 [2024-07-25 01:20:16.517343] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization...
00:34:23.510 [2024-07-25 01:20:16.517422] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:23.510 [... the triplet resumes from 2024-07-25 01:20:16.517491 through 01:20:16.517876 ...]
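The EAL parameter line records how this nvmf application instance was launched: -c 0xF0 is the DPDK core mask (binary 1111 0000, i.e. lcores 4-7), --file-prefix=spdk0 keeps its hugepage files separate from other SPDK processes on the node, and --proc-type=auto lets EAL pick primary/secondary mode. A small illustrative snippet — not part of the test, just decoding the mask arithmetic:

/*
 * Illustrative only: decode the DPDK EAL core mask "-c 0xF0" from the log.
 * 0xF0 = 0b11110000, so bits 4-7 are set and lcores 4, 5, 6 and 7 run the app.
 */
#include <stdio.h>

int main(void)
{
    unsigned long long coremask = 0xF0ULL;       /* value taken from "-c 0xF0" */

    for (int lcore = 0; lcore < 64; lcore++) {
        if (coremask & (1ULL << lcore))
            printf("lcore %d enabled\n", lcore); /* prints 4, 5, 6, 7 */
    }
    return 0;
}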
00:34:23.510 [... the connect()/qpair-failure triplet for tqpair=0x7fafd4000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously from 2024-07-25 01:20:16.518023 through 01:20:16.529503 ...]
00:34:23.512 [... the triplet for tqpair=0x7fafd4000b90 repeats from 2024-07-25 01:20:16.529640 through 01:20:16.530261 ...]
00:34:23.512 [2024-07-25 01:20:16.530306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fad390 (9): Bad file descriptor
00:34:23.512 [2024-07-25 01:20:16.530498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.512 [2024-07-25 01:20:16.530544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.512 qpair failed and we were unable to recover it.
00:34:23.512 [... two more connect()/qpair-failure triplets for tqpair=0x1f9f840 (2024-07-25 01:20:16.530670 through 01:20:16.530914), then the triplet resumes for tqpair=0x7fafd4000b90 at 01:20:16.531043 ...]
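Two distinct failures are interleaved here: the ongoing connect() retries (errno 111) and a flush attempt on an already-closed qpair socket reported as (9) — on Linux, errno 9 is EBADF ("Bad file descriptor"), i.e. qpair 0x1fad390 was torn down and its file descriptor invalidated before the completion flush ran. A one-file sketch, illustrative only, mapping both errno values seen in this log to their glibc descriptions:

/*
 * Illustrative only: print the Linux/glibc descriptions of the two errno
 * values that appear in this log (111 = ECONNREFUSED, 9 = EBADF).
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    printf("errno %d: %s\n", 111, strerror(111)); /* Connection refused */
    printf("errno %d: %s\n", 9, strerror(9));     /* Bad file descriptor */
    return 0;
}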
00:34:23.514 [2024-07-25 01:20:16.544390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.515 [2024-07-25 01:20:16.544431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:23.515 qpair failed and we were unable to recover it.
00:34:23.515 [2024-07-25 01:20:16.547382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.515 [2024-07-25 01:20:16.547411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.515 qpair failed and we were unable to recover it.
00:34:23.515 [2024-07-25 01:20:16.550157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.515 [2024-07-25 01:20:16.550185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:23.515 qpair failed and we were unable to recover it.
00:34:23.516 EAL: No free 2048 kB hugepages reported on node 1
00:34:23.516 [2024-07-25 01:20:16.553376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.516 [2024-07-25 01:20:16.553415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.516 qpair failed and we were unable to recover it.
00:34:23.516 [2024-07-25 01:20:16.553555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.516 [2024-07-25 01:20:16.553583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.516 qpair failed and we were unable to recover it.
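The "EAL: No free 2048 kB hugepages reported on node 1" line above comes from DPDK's environment abstraction layer at initialization: no free 2 MB hugepages were available on NUMA node 1 when the target app started. A minimal sketch of one way to pre-reserve them — writing the per-node count through sysfs, which is standard Linux behavior rather than anything SPDK-specific; the node number matches the log, but the count of 512 is illustrative and the program needs root:

/* Sketch: reserve 2 MB hugepages on NUMA node 1 via sysfs.
 * Equivalent to: echo 512 > <path>. Count is illustrative. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/devices/system/node/node1/hugepages/"
                       "hugepages-2048kB/nr_hugepages";
    FILE *f = fopen(path, "w");

    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "512\n"); /* 512 pages * 2 MB = 1 GB reserved on node 1 */
    fclose(f);
    return 0;
}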
00:34:23.516 [2024-07-25 01:20:16.557258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.516 [2024-07-25 01:20:16.557297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:23.516 qpair failed and we were unable to recover it.
00:34:23.517 [2024-07-25 01:20:16.561655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.561681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.561795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.561821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.561939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.561965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.562133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.562159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.562284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.562310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.562428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.562454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.562581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.562606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.562723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.562749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.562923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.562949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.563060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.563086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 
00:34:23.517 [2024-07-25 01:20:16.563193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.563219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.563347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.563375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.563502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.563528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.563664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.563689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.563840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.517 [2024-07-25 01:20:16.563866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.517 qpair failed and we were unable to recover it. 00:34:23.517 [2024-07-25 01:20:16.563976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.564001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.564118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.564144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.564265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.564293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.564442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.564468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.564602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.564628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 
00:34:23.518 [2024-07-25 01:20:16.564739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.564765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.564904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.564930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.565070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.565095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.565239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.565270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.565386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.565413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.565531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.565557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.565700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.565727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.565840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.565865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.565985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.566011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.566152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.566177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 
00:34:23.518 [2024-07-25 01:20:16.566325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.566352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.566472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.566499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.566639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.566669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.566798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.566824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.566943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.566969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.567097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.567122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.567272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.567299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.567420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.567446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.567598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.567624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.567732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.567758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 
00:34:23.518 [2024-07-25 01:20:16.567900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.567926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.568039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.568065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.568196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.568223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.568364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.568390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.568517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.568543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.568716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.568742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.568859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.568885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.569001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.569028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.569192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.569219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.569337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.569363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 
00:34:23.518 [2024-07-25 01:20:16.569512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.569538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.569652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.569678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.569793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.569823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.569972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.570000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.570112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.570138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.570287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.570313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.570426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.570451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.570576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.570601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.570767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.518 [2024-07-25 01:20:16.570792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.518 qpair failed and we were unable to recover it. 00:34:23.518 [2024-07-25 01:20:16.570932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.570958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 
00:34:23.519 [2024-07-25 01:20:16.571070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.571096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.571204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.571229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.571353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.571379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.571519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.571552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.571718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.571744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.571858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.571884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.572002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.572027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.572136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.572162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.572300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.572326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.572493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.572518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 
00:34:23.519 [2024-07-25 01:20:16.572659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.572685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.572804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.572829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.572949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.572978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.573147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.573172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.573300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.573326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.573438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.573464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.573592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.573617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.573753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.573778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.573900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.573925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.574083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.574108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 
00:34:23.519 [2024-07-25 01:20:16.574269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.574295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.574413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.574438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.574590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.574616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.574757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.574783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.574912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.574937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.575056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.575081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.575206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.575232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.575382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.575407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.575541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.575566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.575694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.575720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 
00:34:23.519 [2024-07-25 01:20:16.575862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.575887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.576053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.576079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.576207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.576254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.576422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.576450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.576591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.576617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.576730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.576756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.576869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.576894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.577038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.577063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.577177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.577203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.577385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.577411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 
00:34:23.519 [2024-07-25 01:20:16.577524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.577550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.577669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.577695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.577843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.519 [2024-07-25 01:20:16.577867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.519 qpair failed and we were unable to recover it. 00:34:23.519 [2024-07-25 01:20:16.578035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.578061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.578176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.578201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.578325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.578351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.578491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.578516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.578656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.578682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.578800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.578825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.578982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.579007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 
00:34:23.520 [2024-07-25 01:20:16.579123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.579147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.579295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.579320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.579433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.579464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.579637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.579662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.579775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.579799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.579943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.579968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.580081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.580106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.580231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.580262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.580381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.580406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.580560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.580585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 
00:34:23.520 [2024-07-25 01:20:16.580693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.580718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.580858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.580885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.580998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.581022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.581138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.581163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.581297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.581323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.581440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.581466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.581611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.581637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.581754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.581779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.581953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.581979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 00:34:23.520 [2024-07-25 01:20:16.582094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.520 [2024-07-25 01:20:16.582120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.520 qpair failed and we were unable to recover it. 
00:34:23.520 [2024-07-25 01:20:16.582232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.520 [2024-07-25 01:20:16.582263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:23.520 qpair failed and we were unable to recover it.
00:34:23.520 [... repeated connect()/qpair-failure records for tqpair=0x7fafc4000b90 elided ...]
00:34:23.520 [2024-07-25 01:20:16.584214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.520 [2024-07-25 01:20:16.584260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.520 qpair failed and we were unable to recover it.
00:34:23.521 [2024-07-25 01:20:16.588570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:23.521 [... repeated connect()/qpair-failure records for tqpair=0x7fafd4000b90 elided ...]
00:34:23.839 [2024-07-25 01:20:16.592393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.839 [2024-07-25 01:20:16.592431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.839 qpair failed and we were unable to recover it.
00:34:23.840 [2024-07-25 01:20:16.596362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.840 [2024-07-25 01:20:16.596402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.840 qpair failed and we were unable to recover it.
00:34:23.840 [... further connect()/qpair-failure records, tqpair alternating among 0x7fafc4000b90, 0x7fafd4000b90, 0x7fafcc000b90 and 0x1f9f840, elided ...]
00:34:23.844 [2024-07-25 01:20:16.616558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.844 [2024-07-25 01:20:16.616584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420
00:34:23.844 qpair failed and we were unable to recover it.
00:34:23.844 [2024-07-25 01:20:16.616757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.616783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.616938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.616963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafc4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.617076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.617104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.617252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.617278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.617422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.617448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.617562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.617589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.617702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.617727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.617863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.617889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.618003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.618029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.618169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.618196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 
00:34:23.844 [2024-07-25 01:20:16.618356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.618383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.618521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.618547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.618691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.618717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.618883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.618915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.619058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.619083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.619257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.619283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.619423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.619448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.619573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.619599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.619711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.619738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.619852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.619878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 
00:34:23.844 [2024-07-25 01:20:16.620016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.620042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.620214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.620247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.620391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.620417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.620537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.620563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.620704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.620729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.620878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.844 [2024-07-25 01:20:16.620904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.844 qpair failed and we were unable to recover it. 00:34:23.844 [2024-07-25 01:20:16.621012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.621038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.621157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.621183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.621322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.621349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.621484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.621509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 
00:34:23.845 [2024-07-25 01:20:16.621630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.621657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.621836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.621862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.622022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.622048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.622191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.622217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.622376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.622402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.622552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.622578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.622719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.622745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.622890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.622917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.623076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.623101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.623223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.623256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 
00:34:23.845 [2024-07-25 01:20:16.623408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.623434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.623552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.623577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.623717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.623742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.623882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.623908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.624071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.624096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.624234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.624267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.624385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.624411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.624578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.624604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.624718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.624744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.624860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.624886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 
00:34:23.845 [2024-07-25 01:20:16.625008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.625033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.625145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.625170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.625310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.625336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.625489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.625520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.625664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.625689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.625832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.625859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.625968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.625994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.626137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.626163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.626307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.626333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.626481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.626506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 
00:34:23.845 [2024-07-25 01:20:16.626617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.626642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.626782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.626808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.626952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.845 [2024-07-25 01:20:16.626977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.845 qpair failed and we were unable to recover it. 00:34:23.845 [2024-07-25 01:20:16.627117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.627142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.627269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.627296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.627436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.627461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.627576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.627602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.627749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.627775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.627914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.627939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.628057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.628084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 
00:34:23.846 [2024-07-25 01:20:16.628205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.628231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.628382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.628408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.628564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.628590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.628738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.628763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.628903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.628929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.629046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.629072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.629184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.629211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.629366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.629393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.629506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.629532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.629671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.629697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 
00:34:23.846 [2024-07-25 01:20:16.629868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.629894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.630061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.630087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.630203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.630230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.630397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.630424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.630579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.630605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.630774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.630799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.630943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.630969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.631084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.631110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.631262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.631289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.631405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.631431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 
00:34:23.846 [2024-07-25 01:20:16.631577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.631603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.631724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.631749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.631881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.631907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.632054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.632084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.632224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.632258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.632402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.632428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.632544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.632571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.632718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.632745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.632865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.632891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.846 [2024-07-25 01:20:16.633030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.633056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 
00:34:23.846 [2024-07-25 01:20:16.633207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.846 [2024-07-25 01:20:16.633233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.846 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.633386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.633412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.633553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.633578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.633723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.633749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.633892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.633919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.634087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.634113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.634279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.634305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.634451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.634477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.634612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.634639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.634784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.634811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 
00:34:23.847 [2024-07-25 01:20:16.634950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.634977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.635122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.635148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.635313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.635339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.635504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.635529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.635640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.635666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.635804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.635829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.635998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.636025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.636173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.636199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.636326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.636353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.636519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.636545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 
00:34:23.847 [2024-07-25 01:20:16.636667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.636693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.636835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.636861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.637002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.637028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.637172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.637199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.637343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.637370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.637497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.637524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.637692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.637718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.637829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.637855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.638020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.638046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 00:34:23.847 [2024-07-25 01:20:16.638214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.847 [2024-07-25 01:20:16.638239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.847 qpair failed and we were unable to recover it. 
00:34:23.847 [2024-07-25 01:20:16.638392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:23.847 [2024-07-25 01:20:16.638418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 
00:34:23.847 qpair failed and we were unable to recover it. 
00:34:23.853 [the preceding connect()/qpair error sequence repeats roughly 200 more times, identical except for timestamps, from 01:20:16.638534 through 01:20:16.672838; every attempt targets the same tqpair 0x7fafd4000b90 at 10.0.0.2:4420 and fails with errno = 111]
00:34:23.853 [2024-07-25 01:20:16.672983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.853 [2024-07-25 01:20:16.673008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.853 qpair failed and we were unable to recover it. 00:34:23.853 [2024-07-25 01:20:16.673127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.853 [2024-07-25 01:20:16.673153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.853 qpair failed and we were unable to recover it. 00:34:23.853 [2024-07-25 01:20:16.673261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.673288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.673432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.673458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.673568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.673593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.673741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.673766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.673913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.673938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.674085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.674111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.674220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.674251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.674423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.674448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 
00:34:23.854 [2024-07-25 01:20:16.674562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.674587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.674728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.674754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.674875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.674900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.675023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.675048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.675169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.675194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.675314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.675347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.675501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.675536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.675721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.675749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.675875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.675901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.676020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.676047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 
00:34:23.854 [2024-07-25 01:20:16.676249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.676290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.676525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.676553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.676697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.676723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.676943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.676969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.677115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.677142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.677308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.677335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.677488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.677514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.677656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.677682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.677824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.677851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.677999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.678026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 
00:34:23.854 [2024-07-25 01:20:16.678173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.854 [2024-07-25 01:20:16.678198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.854 qpair failed and we were unable to recover it. 00:34:23.854 [2024-07-25 01:20:16.678320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.678346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.678498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.678523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.678640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.678669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.678777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.678803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.678942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.678970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.679198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.679224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.679348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.679374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.679521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.679548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.679780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.679806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 
00:34:23.855 [2024-07-25 01:20:16.680024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.680050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.680159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.680185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.680304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.680331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.680446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.680472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.680613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.680639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.680755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.680781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.680919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.680944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.681105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.681131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.681272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.681298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.681441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.681468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 
00:34:23.855 [2024-07-25 01:20:16.681607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.681634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.681801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.681826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.681965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.681991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.682134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.682160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.682317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.682343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.682462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.682488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.682643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.682669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.682784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.682810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.682922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.682948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.683056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.683083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 
00:34:23.855 [2024-07-25 01:20:16.683220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.683257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.683380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.683406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.683516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.683542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.683704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.683730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.683843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.683868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.855 qpair failed and we were unable to recover it. 00:34:23.855 [2024-07-25 01:20:16.683993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.855 [2024-07-25 01:20:16.684021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.856 qpair failed and we were unable to recover it. 00:34:23.856 [2024-07-25 01:20:16.684133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.856 [2024-07-25 01:20:16.684159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.856 qpair failed and we were unable to recover it. 00:34:23.856 [2024-07-25 01:20:16.684257] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.856 [2024-07-25 01:20:16.684291] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.856 [2024-07-25 01:20:16.684306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.856 [2024-07-25 01:20:16.684306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.856 [2024-07-25 01:20:16.684318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.856 [2024-07-25 01:20:16.684330] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.856 [2024-07-25 01:20:16.684330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.856 qpair failed and we were unable to recover it. 
00:34:23.856 [2024-07-25 01:20:16.684411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:23.856 [2024-07-25 01:20:16.684466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:23.856 [2024-07-25 01:20:16.684514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:23.856 [2024-07-25 01:20:16.684516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
[... the reactor start-up notices above were interleaved in the raw log with further repetitions of the same error triplet (01:20:16.684447 through 01:20:16.685703, tqpair=0x7fafd4000b90, errno = 111, addr=10.0.0.2, port=4420); duplicate lines elided ...]
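For readers triaging this failure: errno = 111 is ECONNREFUSED on Linux, the value connect() sets when the target host answers but nothing is listening on the requested port, which is consistent with the NVMe/TCP target on 10.0.0.2:4420 not yet accepting connections at this point in the test. A minimal standalone C sketch (plain POSIX sockets, not SPDK code; the address and port are simply the ones from the log, and any reachable host with a closed port behaves the same) that reproduces the errno the posix_sock_create lines report:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Address and port copied from the log; swap in any reachable host. */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, errno is ECONNREFUSED (111 on Linux). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}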
00:34:23.856 [2024-07-25 01:20:16.685851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.856 [2024-07-25 01:20:16.685877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.856 qpair failed and we were unable to recover it.
[... the triplet continues uninterrupted from 01:20:16.685851 through 01:20:16.699907, still errno = 111 against addr=10.0.0.2, port=4420, rotating among tqpair=0x7fafd4000b90, tqpair=0x7fafcc000b90 and, from 01:20:16.693593 on, tqpair=0x1f9f840. Duplicate lines elided ...]
00:34:23.860 [2024-07-25 01:20:16.700054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.700079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.700221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.700251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.700371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.700397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.700523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.700552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.700675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.700701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.700809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.700835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.700949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.700975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.701112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.701138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.701270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.701297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.701449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.701474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 
00:34:23.860 [2024-07-25 01:20:16.701588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.701618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.701731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.701756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.701906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.701932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.702056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-25 01:20:16.702081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-25 01:20:16.702206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.702237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.702388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.702414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.702524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.702549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.702657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.702682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.702820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.702846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.702961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.702988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 
00:34:23.861 [2024-07-25 01:20:16.703128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.703154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.703375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.703401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.703547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.703573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.703756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.703782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.703954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.703980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.704106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.704131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.704252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.704279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.704392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.704418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.704530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.704556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.704701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.704726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 
00:34:23.861 [2024-07-25 01:20:16.704852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.704879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.705000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.705025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.705167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.705192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.705324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.705349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.705484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.705524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.705652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.861 [2024-07-25 01:20:16.705679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.861 qpair failed and we were unable to recover it. 00:34:23.861 [2024-07-25 01:20:16.705802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.705829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.705948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.705978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.706101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.706128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.706269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.706296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 
00:34:23.862 [2024-07-25 01:20:16.706414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.706440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.706558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.706585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.706722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.706748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.706868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.706895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.707042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.707068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.707175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.707201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.707323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.707350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.707474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.707499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.707630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.707656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.707798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.707823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 
00:34:23.862 [2024-07-25 01:20:16.707944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.707969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.708115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.708140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.708258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.708284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.708407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.708432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.708557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.708583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.708722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.708747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.708869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.708894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.709011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.709036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.709155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.709180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.709301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.709328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 
00:34:23.862 [2024-07-25 01:20:16.709451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.709477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.709589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.862 [2024-07-25 01:20:16.709614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.862 qpair failed and we were unable to recover it. 00:34:23.862 [2024-07-25 01:20:16.709783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.709808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.709952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.709978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.710113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.710157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.710310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.710339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.710461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.710486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.710604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.710631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.710751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.710777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.710891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.710916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 
00:34:23.863 [2024-07-25 01:20:16.711070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.711097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.711217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.711247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.711380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.711406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.711520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.711545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.711691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.711716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.711832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.711857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.711991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.712016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.712129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.712154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.712266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.712294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.712411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.712436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 
00:34:23.863 [2024-07-25 01:20:16.712548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.712573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.712715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.712740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.712916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.712941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.713061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.713086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.713204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.713228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.713347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.713373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.713509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.713534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.713642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.863 [2024-07-25 01:20:16.713667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.863 qpair failed and we were unable to recover it. 00:34:23.863 [2024-07-25 01:20:16.713788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.713827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.713948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.713974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 
00:34:23.864 [2024-07-25 01:20:16.714197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.714223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.714372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.714404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.714627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.714653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.714767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.714792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.714923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.714949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.715086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.715112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.715280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.715307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.715444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.715470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.715645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.715671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.715808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.715834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 
00:34:23.864 [2024-07-25 01:20:16.715983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.716009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.716156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.716182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.716307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.716333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.716474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.716499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.716640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.716665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.716806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.716832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.716951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.716976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.717119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.717144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.717263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.717289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.717404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.717430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 
00:34:23.864 [2024-07-25 01:20:16.717544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.717569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.717746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.717771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.717897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.717922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.718033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.718059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.718180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.718206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.718339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.718367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.718487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.718512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.718631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.718657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.718801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.718826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 00:34:23.864 [2024-07-25 01:20:16.718944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.864 [2024-07-25 01:20:16.718969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.864 qpair failed and we were unable to recover it. 
00:34:23.864 [2024-07-25 01:20:16.719119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.719160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.719342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.719370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.719498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.719526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.719653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.719679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.719857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.719883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.720004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.720031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.720153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.720178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.720323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.720349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.720468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.720495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.720634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.720660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 
00:34:23.865 [2024-07-25 01:20:16.720806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.720831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.720971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.721002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.721155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.721193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.721327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.721356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.721498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.721525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.721664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.721689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.721811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.721837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.721958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.721984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.722106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.722131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.722264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.722304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 
00:34:23.865 [2024-07-25 01:20:16.722458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.722486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.722606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.722632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.722764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.722790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.722935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.722961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.723092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-25 01:20:16.723118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-25 01:20:16.723270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-25 01:20:16.723297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-25 01:20:16.723416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-25 01:20:16.723442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-25 01:20:16.723582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-25 01:20:16.723607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-25 01:20:16.723766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-25 01:20:16.723794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-25 01:20:16.723922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-25 01:20:16.723947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 
00:34:23.866 [2024-07-25 01:20:16.724071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.724096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.724211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.724237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.724378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.724403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.724526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.724564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.724685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.724712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.724854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.724880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.724995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.725021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.725145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.725184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.725308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.725341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.725489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.725515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.725662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.725688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.725806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.725832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.725947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.725973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.726085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.726111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.726224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.726256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.726382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.726407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.726526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.726550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.726668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.726693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.726842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.726867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.866 qpair failed and we were unable to recover it.
00:34:23.866 [2024-07-25 01:20:16.727016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.866 [2024-07-25 01:20:16.727041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.727159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.727184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.727332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.727360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.727480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.727505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.727656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.727683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.727802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.727829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.727946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.727972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.728084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.728110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.728218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.728250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.728378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.728405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.728531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.728556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.728679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.728705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.728852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.728878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.728995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.729022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.729137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.729164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.729299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.729337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.729468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.729495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.729646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.729672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.867 qpair failed and we were unable to recover it.
00:34:23.867 [2024-07-25 01:20:16.729818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.867 [2024-07-25 01:20:16.729845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.729982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.730007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.730122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.730148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.730265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.730292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.730412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.730438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.730544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.730569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.730678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.730704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.730814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.730839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.730980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.731005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.731118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.731145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.731272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.731298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.731415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.731443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.731569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.731594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.731736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.731762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.731889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.731914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.732035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.732062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.732203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.732229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.732354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.732380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.732497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.732524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.732682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.732707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.732823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.732848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.732983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.733010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.733119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.733144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.733262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.733288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.733428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.733453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.733601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.868 [2024-07-25 01:20:16.733628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.868 qpair failed and we were unable to recover it.
00:34:23.868 [2024-07-25 01:20:16.733755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.733781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.733899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.733926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.734072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.734098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.734249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.734276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.734390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.734416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.734555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.734580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.734700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.734725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.734836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.734861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.735000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.735025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.735146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.735172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.735287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.735314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.735432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.735457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.735567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.735598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.735715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.735742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.735857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.735883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.736037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.736062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.736176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.736201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.736320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.736347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.736479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.736505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.736627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.736653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.736810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.736835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.736950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.736975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.737085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.737113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
00:34:23.869 [2024-07-25 01:20:16.737263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-25 01:20:16.737289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.737408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.737434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.737560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.737585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.737702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.737727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.737841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.737867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.737988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.738014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.738186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.738212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.738370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.738410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.738541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.738568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.738687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.738713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.738826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.738852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.738961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.738987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.739168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.739192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.739320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.739346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.739473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.739498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.739614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.739639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.739754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.739783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.739897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.739922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.740067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.740093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.740200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.740226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.740345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.740370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.740486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.740511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.740619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.740644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.740756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.740782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.740903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.870 [2024-07-25 01:20:16.740929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.870 qpair failed and we were unable to recover it.
00:34:23.870 [2024-07-25 01:20:16.741054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.741093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.741214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.741254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.741374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.741400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.741542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.741568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.741677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.741703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.741857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.741883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.742000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.742027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.742138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.742163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.742286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.742312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.742448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.742474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.742592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.742616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.742730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.742755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.742894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.742919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.743037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.743062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.743171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.743195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.743315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.743344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.743477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.743516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.743669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.743697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.743849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.743880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.743999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.744031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.744154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.744179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.744299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.744325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.744453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.744478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.744596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.744622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.744734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.744759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.871 qpair failed and we were unable to recover it.
00:34:23.871 [2024-07-25 01:20:16.744897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.871 [2024-07-25 01:20:16.744922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.745041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.745066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.745176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.745201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.745330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.745356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.745467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.745492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.745629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.745654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.745761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.745786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.745900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.745925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.746065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.746092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.746206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.746231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.746376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.746402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.746569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.746595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.746729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.746754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.746859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.746884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.747006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.747033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.747173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.747198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.747348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.747374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.747516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.747542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.747692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.747718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.747833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.747860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.747978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.748008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.748151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.748176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.748307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.748334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.748446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.748472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.748585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.748611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.748726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.872 [2024-07-25 01:20:16.748752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.872 qpair failed and we were unable to recover it.
00:34:23.872 [2024-07-25 01:20:16.748869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.748895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.873 [2024-07-25 01:20:16.749012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.749037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.873 [2024-07-25 01:20:16.749164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.749189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.873 [2024-07-25 01:20:16.749339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.749365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.873 [2024-07-25 01:20:16.749473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.749499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.873 [2024-07-25 01:20:16.749622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.749647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.873 [2024-07-25 01:20:16.749755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.749780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.873 [2024-07-25 01:20:16.749909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.749936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.873 [2024-07-25 01:20:16.750060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.750085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.750205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.750231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.750376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.750402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.750544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.750569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.750712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.750737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.750857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.750882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.751025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.751051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.751170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.751196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.751350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.751376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.751488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.751514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 
00:34:23.873 [2024-07-25 01:20:16.751655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.751681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.751805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.751830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.751947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.751972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.752080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.752105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.752227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.752259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.752428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.752454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.752567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.752593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.752712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.752738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.752852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.752878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 00:34:23.873 [2024-07-25 01:20:16.752985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.873 [2024-07-25 01:20:16.753010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.873 qpair failed and we were unable to recover it. 
00:34:23.873 [2024-07-25 01:20:16.753119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.873 [2024-07-25 01:20:16.753146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.873 qpair failed and we were unable to recover it.
00:34:23.874 [... 19 further identical failures through 2024-07-25 01:20:16.756372, now also against tqpair=0x7fafd4000b90, 0x7fafcc000b90, and 0x7fafc4000b90 in addition to 0x1f9f840; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:34:23.874 [... the connect() failed (errno = 111) / qpair-failure pair repeats 149 further times between 2024-07-25 01:20:16.756504 and 01:20:16.779315, mostly against tqpair=0x1f9f840 with interleaved runs against 0x7fafcc000b90 and 0x7fafd4000b90; the final attempt is: ...]
00:34:23.878 [2024-07-25 01:20:16.779474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.878 [2024-07-25 01:20:16.779503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.878 qpair failed and we were unable to recover it.
00:34:23.878 [2024-07-25 01:20:16.779638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.878 [2024-07-25 01:20:16.779672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.878 qpair failed and we were unable to recover it. 00:34:23.878 [2024-07-25 01:20:16.779830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.878 [2024-07-25 01:20:16.779857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.878 qpair failed and we were unable to recover it. 00:34:23.878 [2024-07-25 01:20:16.779984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.878 [2024-07-25 01:20:16.780009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.878 qpair failed and we were unable to recover it. 00:34:23.878 [2024-07-25 01:20:16.780128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.878 [2024-07-25 01:20:16.780154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.878 qpair failed and we were unable to recover it. 00:34:23.878 [2024-07-25 01:20:16.780308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.780348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.780486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.780513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.780654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.780680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.780807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.780832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.780957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.780983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.781094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.781120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 
00:34:23.879 [2024-07-25 01:20:16.781257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.781284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.781430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.781457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.781644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.781670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.781845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.781871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.782002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.782028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.782140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.782166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.782308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.782335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.782457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.782484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.782591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.782617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.782744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.782770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 
00:34:23.879 [2024-07-25 01:20:16.782885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.782911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.783061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.783086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.783229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.783261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.783388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.879 [2024-07-25 01:20:16.783414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.879 qpair failed and we were unable to recover it. 00:34:23.879 [2024-07-25 01:20:16.783535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.783560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.783676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.783703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.783880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.783906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.784051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.784076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.784188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.784213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.784356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.784396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 
00:34:23.880 [2024-07-25 01:20:16.784540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.784566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.784711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.784737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.784849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.784875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.785051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.785076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.785191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.785217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.785353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.785380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.785503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.785529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.785642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.785667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.785779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.785806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.785951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.785981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 
00:34:23.880 [2024-07-25 01:20:16.786126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.786152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.786271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.786297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.786414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.786440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.786585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.786611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.786735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.786760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.786876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.786902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.787015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.787041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.787158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.787185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.787314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.787340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.787466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.787492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 
00:34:23.880 [2024-07-25 01:20:16.787609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.787636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.787776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.787801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.880 qpair failed and we were unable to recover it. 00:34:23.880 [2024-07-25 01:20:16.787937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.880 [2024-07-25 01:20:16.787984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.788145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.788183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.788348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.788387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.788523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.788549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.788670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.788696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.788806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.788832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.788946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.788972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.789089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.789116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 
00:34:23.881 [2024-07-25 01:20:16.789269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.789302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.789453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.789478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.789595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.789621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.789735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.789760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.789876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.789901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.790034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.790060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.790181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.790207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.790320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.790346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.790469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.790507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.790630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.790656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 
00:34:23.881 [2024-07-25 01:20:16.790772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.790798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.790917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.790943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.791062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.791087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.791202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.791227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.791389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.791414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.791535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.791561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.791672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.791698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.791819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.791845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.791977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.792002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.792146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.792176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 
00:34:23.881 [2024-07-25 01:20:16.792316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-25 01:20:16.792343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it. 00:34:23.881 [2024-07-25 01:20:16.792470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.792495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.792606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.792631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.792759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.792784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.792895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.792920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.793033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.793058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.793189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.793214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.793367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.793393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.793520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.793546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.793662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.793687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 
00:34:23.882 [2024-07-25 01:20:16.793824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.793849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.793960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.793986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.794110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.794148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.794306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.794334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.794459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.794484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.794599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.794625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.794771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.794798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.794910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.794935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.795043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.795069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.795191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.795216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 
00:34:23.882 [2024-07-25 01:20:16.795373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.795399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.795517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.795543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.795702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.795728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.795832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.795858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.795969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.795996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.796165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.796190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.796323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.796350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.796466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.796492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.796604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.796629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 00:34:23.882 [2024-07-25 01:20:16.796740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.882 [2024-07-25 01:20:16.796765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.882 qpair failed and we were unable to recover it. 
00:34:23.883 [2024-07-25 01:20:16.796875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.796900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.797036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.797061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.797173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.797198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.797361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.797388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.797499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.797524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.797637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.797662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.797771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.797796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.797939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.797963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.798103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.798129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.798260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.798294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 
00:34:23.883 [2024-07-25 01:20:16.798454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.798479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.798611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.798636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.798765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.798791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.798943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.798969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.799137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.799162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.799310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.799349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.799480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.799507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.799648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.799674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.799785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.799811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.799937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.799964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 
00:34:23.883 [2024-07-25 01:20:16.800091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.800117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.800252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.800279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.800443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.800468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.800626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.800651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.800764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.800789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.800908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.800933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.801059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.801084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-25 01:20:16.801200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-25 01:20:16.801226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.801361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.801387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.801505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.801531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 
00:34:23.884 [2024-07-25 01:20:16.801650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.801676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.801805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.801831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.801956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.801982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.802098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.802123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.802264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.802291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.802418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.802443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.802565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.802596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.802717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.802743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.802857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.802883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.803000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.803026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 
00:34:23.884 [2024-07-25 01:20:16.803174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.803200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.803334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.803360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.803480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.803505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.803659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.803685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.803840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.803866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.803994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.804020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.804150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.804175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.804308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.804346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.804476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.804503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.804626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.804652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 
00:34:23.884 [2024-07-25 01:20:16.804777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.804803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.804927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.804952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.805076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.805101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.805227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.805258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.805409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.805434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.805551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.805577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.805696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.805722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-25 01:20:16.805851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-25 01:20:16.805877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.806008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.806035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.806158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.806186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 
00:34:23.885 [2024-07-25 01:20:16.806331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.806357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.806471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.806496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.806656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.806681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.806801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.806826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.806974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.806999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.807120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.807145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.807261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.807286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.807407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.807432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.807551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.807576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.807695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.807720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 
00:34:23.885 [2024-07-25 01:20:16.807835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.807859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.807972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.807997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.808127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.808165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.808325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.808363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.808508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.808544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.808671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.808696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.808811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.808836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.809000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.809025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.809140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.809167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.809298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.809324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 
00:34:23.885 [2024-07-25 01:20:16.809443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.809468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-25 01:20:16.809607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-25 01:20:16.809633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.809746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.809771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.809894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.809919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.810045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.810070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.810186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.810211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.810357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.810383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.810501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.810528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.810670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.810695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.810831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.810856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 
00:34:23.886 [2024-07-25 01:20:16.810976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.811008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.811156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.811183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.811319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.811358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.811483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.811510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.811632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.811660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.811773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.811800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.811911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.811937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.812055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.812081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.812198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.812224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.812359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.812386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 
00:34:23.886 [2024-07-25 01:20:16.812499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.812525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.812649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.812675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.812788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.812814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.812940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.812971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.813090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.813116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.813255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.813282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.813399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.813425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.813541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.813567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.813706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.813732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.813872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.813898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 
00:34:23.886 [2024-07-25 01:20:16.814013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.814039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.814182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.814208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.814333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.814360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.814483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.814510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.814640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.814668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.814801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.814826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.814939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.814964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.815085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.815111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.815236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.815269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.815411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.815437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 
00:34:23.886 [2024-07-25 01:20:16.815547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.815572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.815687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.815712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.815826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.815851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.815983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-25 01:20:16.816008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-25 01:20:16.816148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.816173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.816295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.816321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.816435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.816461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.816590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.816616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.816731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.816756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.816881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.816907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 
00:34:23.887 [2024-07-25 01:20:16.817056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.817087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.817198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.817223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.817375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.817401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.817531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.817557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.817711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.817737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.817860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.817887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.818042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.818068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.818177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.818202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.818339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.818365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.818500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.818525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 
00:34:23.887 [2024-07-25 01:20:16.818651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.818676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.818793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.818817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.818949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.818974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.819104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.819129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.819283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.819309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.819428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.819453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.819594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.819619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.819725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.819750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.819864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.819889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.820000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.820024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 
00:34:23.887 [2024-07-25 01:20:16.820136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.820161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.820288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.820314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.820442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.820467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.820587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.820612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.820729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.820754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.820897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.820922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.821031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.821056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.821164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.821193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.821308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.821334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.821454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.821479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 
00:34:23.887 [2024-07-25 01:20:16.821589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.821614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.821727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.821752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.821886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.821925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.822050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.822077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.822203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.822229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.822363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.822389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.822514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.822539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.822684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.822709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.822824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.822850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.822968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.822993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 
00:34:23.887 [2024-07-25 01:20:16.823106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.823131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.823425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.823450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.887 [2024-07-25 01:20:16.823572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.887 [2024-07-25 01:20:16.823597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.887 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.823715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.823741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.823867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.823895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.824049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.824075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.824186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.824212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.824345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.824372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.824523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.824548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.824657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.824682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 
00:34:23.888 [2024-07-25 01:20:16.824823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.824849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.824993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.825019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:23.888 [2024-07-25 01:20:16.825131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.825159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.825289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.825317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:23.888 [2024-07-25 01:20:16.825437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.825465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.825606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.825632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.825747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
[2024-07-25 01:20:16.825774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.825891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.825917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:23.888 [2024-07-25 01:20:16.826075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.826102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.826229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.826268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.888 [2024-07-25 01:20:16.826400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.826426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.826546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.826572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.826712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.826737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.826856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.826881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.827009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.827034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.827149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.827174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
00:34:23.888 [2024-07-25 01:20:16.827310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.888 [2024-07-25 01:20:16.827338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.888 qpair failed and we were unable to recover it.
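For anyone triaging this run: errno = 111 is ECONNREFUSED on Linux, i.e. the host's TCP connection attempt to 10.0.0.2:4420 is actively refused because nothing is accepting on that port while the target side of the test is down. A minimal sketch with plain POSIX sockets (illustrative only, not SPDK's sock layer; address and port are simply taken from the log above) reproduces the same errno against a reachable address with no listener:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port used by the test */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address used by the test */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Reachable host, no listener -> errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

If the address were unreachable instead, connect() would typically fail with errno 110 (ETIMEDOUT) or 113 (EHOSTUNREACH) rather than 111, which is one way to tell "target process not listening" apart from "network path down" in logs like this.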
00:34:23.888 [2024-07-25 01:20:16.827453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.827478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.827589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.827616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.827753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.827778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.827896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.827922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.828077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.828102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.828235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.828267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.828384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.828410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.828538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.828563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.828718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.828744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.828852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.828878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 
00:34:23.888 [2024-07-25 01:20:16.829009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.829035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.829163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.829188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.829329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.829361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.829475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.829501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.829634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.829659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.829802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.829828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.829949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.829974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.830090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.830115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.830258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.830296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 00:34:23.888 [2024-07-25 01:20:16.830420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.888 [2024-07-25 01:20:16.830446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.888 qpair failed and we were unable to recover it. 
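Triage note: errno = 111 is ECONNREFUSED, i.e. the initiator's connect() reached 10.0.0.2 but nothing was accepting on port 4420 (the standard NVMe/TCP port), which is the expected state while the target side of this disconnect test is down. A quick, hypothetical way to reproduce the check by hand, assuming a bash shell with /dev/tcp support and the timeout utility (this probe is not part of the test scripts):

    # Hypothetical probe (not part of the autotest scripts): try a plain TCP
    # connect to the address/port seen in the records above. A refused
    # connection is exactly what errno = 111 (ECONNREFUSED) reports.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener up on 10.0.0.2:4420"
    else
        echo "connect() refused or timed out - matches the errno = 111 records"
    fi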
00:34:23.888 [... identical failure records against tqpair=0x1f9f840 continue for timestamps 01:20:16.830569 through 01:20:16.831196 ...]
00:34:23.889 [2024-07-25 01:20:16.831347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.889 [2024-07-25 01:20:16.831388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.889 qpair failed and we were unable to recover it.
00:34:23.889 [... the same two-line failure pattern repeats against tqpair=0x7fafcc000b90 for timestamps 01:20:16.831524 through 01:20:16.833946 ...]
00:34:23.889 [2024-07-25 01:20:16.834063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.889 [2024-07-25 01:20:16.834091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.889 qpair failed and we were unable to recover it.
00:34:23.889 [... identical failure records against tqpair=0x1f9f840 repeat for timestamps 01:20:16.834210 through 01:20:16.848340, each ending "qpair failed and we were unable to recover it." ...]
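The pointer-like tqpair values are simply the in-process addresses of the qpair objects being retried; the run above shows the initiator alternating between two of them (0x1f9f840 and 0x7fafcc000b90) while every attempt is refused. When reading a long log like this, a per-qpair summary is usually more useful than the raw records; a hypothetical one-liner, assuming the log has been saved to build.log:

    # Count failure records per qpair address to see how many distinct qpairs
    # were retried and how often each one failed.
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn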
00:34:23.891 [... identical failure records against tqpair=0x1f9f840 continue for timestamps 01:20:16.848462 through 01:20:16.848922 ...]
00:34:23.891 [2024-07-25 01:20:16.849030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.891 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:23.891 [2024-07-25 01:20:16.849056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.891 qpair failed and we were unable to recover it.
00:34:23.891 [... failure records for 01:20:16.849173-.849198 and 01:20:16.849350-.849376 elided ...]
00:34:23.891 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:23.891 [... failure record for 01:20:16.849501-.849526 elided ...]
00:34:23.891 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.891 [... failure records for 01:20:16.849636-.849807 elided ...]
00:34:23.891 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.891 [... identical failure records against tqpair=0x1f9f840 continue for timestamps 01:20:16.849915 through 01:20:16.851106 ...]
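The script-trace lines interleaved above come from the test harness rather than the initiator: the trap registers cleanup (dump the app's shared memory with process_shm, then tear the fixture down with nvmftestfini) on SIGINT/SIGTERM/EXIT, and rpc_cmd asks the running SPDK target to create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0. A minimal sketch of the same two steps, assuming an SPDK checkout and the default RPC socket path (process_shm and nvmftestfini are helper functions defined inside the test scripts themselves, not standalone commands):

    # Sketch, not a verbatim excerpt of the test scripts: register cleanup
    # handlers, then create the malloc bdev the trace line above refers to.
    trap 'process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini' SIGINT SIGTERM EXIT

    # rpc_cmd wraps scripts/rpc.py; 64 = total size in MB, 512 = block size in bytes.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0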
00:34:23.891 [... identical connect() failed (errno = 111) / connection-error records against tqpair=0x1f9f840 repeat for timestamps 01:20:16.851236 through 01:20:16.857124, each ending "qpair failed and we were unable to recover it." ...]
00:34:23.892 [2024-07-25 01:20:16.857256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.857283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.857428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.857455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.857572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.857597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.857713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.857738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.857886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.857911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.858034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.858060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.858221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.858251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.858373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.858397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.858536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.858562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.858688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.858713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 
00:34:23.892 [2024-07-25 01:20:16.858848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.858873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.859025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.859050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.859182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.859208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.859404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.859430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.892 qpair failed and we were unable to recover it. 00:34:23.892 [2024-07-25 01:20:16.859556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.892 [2024-07-25 01:20:16.859581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.893 qpair failed and we were unable to recover it. 00:34:23.893 [2024-07-25 01:20:16.859725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.893 [2024-07-25 01:20:16.859751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.893 qpair failed and we were unable to recover it. 00:34:23.893 [2024-07-25 01:20:16.859884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.893 [2024-07-25 01:20:16.859909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.893 qpair failed and we were unable to recover it. 00:34:23.893 [2024-07-25 01:20:16.860020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.893 [2024-07-25 01:20:16.860045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.893 qpair failed and we were unable to recover it. 00:34:23.893 [2024-07-25 01:20:16.860162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.893 [2024-07-25 01:20:16.860186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.893 qpair failed and we were unable to recover it. 00:34:23.893 [2024-07-25 01:20:16.860297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.893 [2024-07-25 01:20:16.860351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 00:34:23.893 qpair failed and we were unable to recover it. 
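errno 111 is ECONNREFUSED on Linux: each of these entries means the initiator's TCP connect() to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) was refused because nothing was listening, which is the expected state while this target_disconnect test has the target torn down. A minimal standalone sketch of the failing call (not SPDK's actual posix_sock_create, whose file and line the log cites) would be:

/* Minimal sketch: reproduce the connect() failure this log reports.
 * Against a host with no listener on port 4420 it prints
 * "connect() failed, errno = 111 (Connection refused)". */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* The path the log is exercising: errno = 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}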
00:34:23.893 [2024-07-25 01:20:16.860528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.893 [2024-07-25 01:20:16.860578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.893 qpair failed and we were unable to recover it.
00:34:23.893 [... two more retries against tqpair=0x7fafd4000b90 (01:20:16.860733 and 01:20:16.860911) fail the same way; this is a second qpair object, distinct from 0x1f9f840 ...]
00:34:23.893 [... from 01:20:16.861115 through 01:20:16.862005 the retries against tqpair=0x1f9f840 resume, failing with the same three-entry pattern ...]
00:34:23.893 [... the tqpair=0x1f9f840 failure pattern continues uninterrupted from 01:20:16.862143 through 01:20:16.874405: each attempt logs posix_sock_create connect() failed, errno = 111, then the nvme_tcp_qpair_connect_sock connection error for addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:34:23.895 [... retries against tqpair=0x1f9f840 continue from 01:20:16.874552 through 01:20:16.875623 ...]
00:34:23.895 Malloc0
00:34:23.895 [... two more tqpair=0x1f9f840 retries (01:20:16.875746, 01:20:16.875885) fail while the test script prints the newly created Malloc0 bdev name ...]
00:34:23.896 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.896 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:23.896 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.896 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.896 [... tqpair=0x1f9f840 connect() retries keep failing with errno = 111 throughout this script output, from 01:20:16.876031 through 01:20:16.877066 ...]
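The rpc_cmd call above is the test script re-creating the target's TCP transport over SPDK's JSON-RPC interface (rpc_cmd wraps scripts/rpc.py). As a hedged sketch of what that boils down to, the snippet below writes the corresponding JSON-RPC request to /var/tmp/spdk.sock, SPDK's default RPC socket path; the real helper may pass extra transport options (the trailing -o), so treat the payload as a minimal assumption:

/* Sketch of "rpc_cmd nvmf_create_transport -t tcp": one JSON-RPC 2.0
 * request over SPDK's RPC Unix socket. The target acknowledges with the
 * "TCP Transport Init" notice seen later in this log. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,"
        "\"method\":\"nvmf_create_transport\","
        "\"params\":{\"trtype\":\"TCP\"}}";

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");              /* target RPC server not up yet */
        close(fd);
        return 1;
    }

    if (write(fd, req, strlen(req)) < 0)
        perror("write");

    char resp[4096];
    ssize_t n = read(fd, resp, sizeof(resp) - 1);  /* read one response */
    if (n > 0) {
        resp[n] = '\0';
        printf("%s\n", resp);
    }

    close(fd);
    return 0;
}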
00:34:23.896 [... tqpair=0x1f9f840 retries continue from 01:20:16.877189 through 01:20:16.878065 ...]
00:34:23.896 [2024-07-25 01:20:16.878207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.896 [2024-07-25 01:20:16.878254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafd4000b90 with addr=10.0.0.2, port=4420
00:34:23.896 qpair failed and we were unable to recover it.
00:34:23.896 [... two more tqpair=0x7fafd4000b90 retries (01:20:16.878393, 01:20:16.878540) fail the same way ...]
00:34:23.896 [... three more tqpair=0x7fafd4000b90 retries (01:20:16.878715, 01:20:16.878931, 01:20:16.879116) fail with errno = 111 ...]
00:34:23.896 [2024-07-25 01:20:16.879315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.896 [2024-07-25 01:20:16.879342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.896 [2024-07-25 01:20:16.879330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:23.896 qpair failed and we were unable to recover it.
00:34:23.896 [... tqpair=0x1f9f840 retries continue from 01:20:16.879463 through 01:20:16.880312 ...]
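The nvmf_tcp_create notice interleaved above is the target side confirming the TCP transport came back, which is why connection attempts can eventually succeed again. The sketch below is a generic retry loop, not SPDK's actual reconnect logic, showing the usual way an initiator handles this log's failure mode: treat ECONNREFUSED as retryable with a short delay, and give up on any other error or after a bounded number of attempts:

/* Generic retry sketch (not SPDK's reconnect path): re-dial the target
 * until the listener is back, retrying only the ECONNREFUSED case. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int dial_once(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(port),
    };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;                      /* listener is back */

    int saved = errno;                  /* close() may clobber errno */
    close(fd);
    errno = saved;
    return -1;
}

int main(void)
{
    for (int attempt = 1; attempt <= 50; attempt++) {
        int fd = dial_once("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        if (errno != ECONNREFUSED) {    /* only retry the refused case */
            perror("connect");
            return 1;
        }
        usleep(100 * 1000);             /* 100 ms between retries */
    }
    fprintf(stderr, "target never came back\n");
    return 1;
}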
00:34:23.896 [... connect() failed, errno = 111 / sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it -- the triplet repeats with successive timestamps ...]
00:34:23.898 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.898 [2024-07-25 01:20:16.887557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.898 [2024-07-25 01:20:16.887584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.898 qpair failed and we were unable to recover it.
00:34:23.898 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:23.898 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.898 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.898 [2024-07-25 01:20:16.887691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.898 [2024-07-25 01:20:16.887716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.898 qpair failed and we were unable to recover it.
00:34:23.898 [... the triplet repeats with successive timestamps ...]
00:34:23.898 [... connect() failed, errno = 111 / sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it -- the triplet repeats with successive timestamps ...]
00:34:23.899 [... triplets continue for tqpair=0x1f9f840 ...]
00:34:23.899 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.899 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:23.899 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.899 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.899 [2024-07-25 01:20:16.895812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.899 [2024-07-25 01:20:16.895838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.899 qpair failed and we were unable to recover it.
00:34:23.899 [... connect() failed, errno = 111 / sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it -- the triplet repeats with successive timestamps ...]
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.901 [... triplets continue for tqpair=0x1f9f840 ...]
00:34:23.901 [2024-07-25 01:20:16.904166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.904203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
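The three rpc_cmd invocations traced in this stretch of the log build the target side end to end: create the subsystem, attach the Malloc0 namespace, then add the TCP listener. A sketch of the equivalent standalone calls, assuming rpc_cmd wraps SPDK's scripts/rpc.py and is run from the root of an SPDK checkout (arguments copied verbatim from the trace lines above):

#!/usr/bin/env bash
# Same subsystem setup as the traced rpc_cmd calls.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420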
00:34:23.901 [2024-07-25 01:20:16.904981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.905007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.905147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.905173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.905296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.905323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.905473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.905500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.905642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.905669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.905785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.905810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.905919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.905944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.906087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.906113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.906230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.906262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.906375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.906401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.906545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.906572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.906718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.906743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.906885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.906911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.907026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.907052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.907198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.907223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fafcc000b90 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.907365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.901 [2024-07-25 01:20:16.907392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9f840 with addr=10.0.0.2, port=4420
00:34:23.901 qpair failed and we were unable to recover it.
00:34:23.901 [2024-07-25 01:20:16.907660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:23.901 [2024-07-25 01:20:16.910166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.901 [2024-07-25 01:20:16.910354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.901 [2024-07-25 01:20:16.910385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.901 [2024-07-25 01:20:16.910402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.901 [2024-07-25 01:20:16.910416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840
00:34:23.901 [2024-07-25 01:20:16.910451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:23.901 qpair failed and we were unable to recover it.
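The sct/sc pair printed by _nvme_fabric_qpair_connect_poll is unpacked from the 16-bit status word of the CONNECT completion (CQE dword 3, bits 31:16; per the NVMe base spec, bit 0 is the phase tag, bits 8:1 the Status Code, bits 11:9 the Status Code Type). SCT 1 is a command-specific status and SC 130 is 0x82; the target-side record just above, "Unknown controller ID 0x1", shows the cause: after the forced disconnect, the I/O queue CONNECT names a controller ID the target no longer recognizes. A small sketch of that unpacking (the packed word is reconstructed here purely for illustration):

  # Rebuild the status word as it sits in the completion, then unpack it
  # (bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT).
  status=$(( (1 << 9) | (130 << 1) ))
  sc=$(( (status >> 1) & 0xff ))
  sct=$(( (status >> 9) & 0x7 ))
  printf 'sct %d, sc %d (0x%02x)\n' "$sct" "$sc" "$sc"   # -> sct 1, sc 130 (0x82)

The follow-on "CQ transport error -6" is the negated errno the transport hands back once the qpair is torn down; 6 is ENXIO, matching the log's own gloss "No such device or address".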
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:23.901 01:20:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3927682
[2024-07-25 01:20:16.919991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.902 [2024-07-25 01:20:16.920108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.902 [2024-07-25 01:20:16.920136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.902 [2024-07-25 01:20:16.920150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.902 [2024-07-25 01:20:16.920163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840
00:34:23.902 [2024-07-25 01:20:16.920192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:23.902 qpair failed and we were unable to recover it.
00:34:23.902 [2024-07-25 01:20:16.929938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.902 [2024-07-25 01:20:16.930051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.902 [2024-07-25 01:20:16.930077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.902 [2024-07-25 01:20:16.930092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.902 [2024-07-25 01:20:16.930105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840
00:34:23.902 [2024-07-25 01:20:16.930134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:23.902 qpair failed and we were unable to recover it.
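rpc_cmd in the xtrace lines above is the autotest harness's wrapper around SPDK's scripts/rpc.py, so outside the harness the two listener additions from target_disconnect.sh would look roughly like this (a sketch assuming the default RPC socket; subsystem, transport, address, and port are copied from the log):

  # Re-add the subsystem's TCP listener, then the discovery listener,
  # mirroring target_disconnect.sh lines 25-26 captured above.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The trailing "wait 3927682" is plain bash: the script blocks until the backgrounded job with that PID exits.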
00:34:23.902 [2024-07-25 01:20:16.939891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.902 [2024-07-25 01:20:16.940017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.902 [2024-07-25 01:20:16.940044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.902 [2024-07-25 01:20:16.940059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.902 [2024-07-25 01:20:16.940072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:23.902 [2024-07-25 01:20:16.940100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.902 qpair failed and we were unable to recover it. 00:34:24.161 [2024-07-25 01:20:16.949999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.161 [2024-07-25 01:20:16.950125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.161 [2024-07-25 01:20:16.950154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:16.950173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:16.950188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:16.950218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:16.959959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:16.960070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:16.960102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:16.960118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:16.960132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:16.960160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 
00:34:24.162 [2024-07-25 01:20:16.969983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:16.970097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:16.970122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:16.970137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:16.970150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:16.970178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:16.979981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:16.980106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:16.980132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:16.980146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:16.980159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:16.980187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:16.989984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:16.990095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:16.990127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:16.990141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:16.990155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:16.990183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 
00:34:24.162 [2024-07-25 01:20:17.000110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.000226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.000262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.000277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.000290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:17.000324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:17.010050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.010162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.010189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.010203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.010217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:17.010252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:17.020156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.020289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.020316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.020331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.020345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:17.020375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 
00:34:24.162 [2024-07-25 01:20:17.030147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.030302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.030329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.030344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.030358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:17.030387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:17.040264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.040380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.040406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.040420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.040433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:17.040462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:17.050196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.050324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.050358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.050373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.050387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:17.050415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 
00:34:24.162 [2024-07-25 01:20:17.060200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.060326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.060353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.060368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.060381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:17.060409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:17.070362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.070481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.070507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.070521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.070534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.162 [2024-07-25 01:20:17.070562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.162 qpair failed and we were unable to recover it. 00:34:24.162 [2024-07-25 01:20:17.080278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.162 [2024-07-25 01:20:17.080403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.162 [2024-07-25 01:20:17.080430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.162 [2024-07-25 01:20:17.080444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.162 [2024-07-25 01:20:17.080457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.080485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 
00:34:24.163 [2024-07-25 01:20:17.090303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.090424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.090449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.090463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.090481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.090510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.100321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.100451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.100475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.100489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.100502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.100530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.110463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.110594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.110620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.110635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.110648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.110675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 
00:34:24.163 [2024-07-25 01:20:17.120401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.120518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.120544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.120559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.120572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.120599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.130427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.130542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.130567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.130582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.130595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.130623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.140487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.140661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.140687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.140702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.140716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.140743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 
00:34:24.163 [2024-07-25 01:20:17.150485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.150605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.150631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.150645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.150658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.150687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.160501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.160644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.160669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.160684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.160698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.160725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.170526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.170640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.170665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.170679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.170692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.170720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 
00:34:24.163 [2024-07-25 01:20:17.180563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.180685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.180710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.180726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.180745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.180775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.190586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.190716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.190742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.190756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.190769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.190797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.200627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.200745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.200770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.200785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.200799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.200827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 
00:34:24.163 [2024-07-25 01:20:17.210723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.163 [2024-07-25 01:20:17.210841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.163 [2024-07-25 01:20:17.210866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.163 [2024-07-25 01:20:17.210881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.163 [2024-07-25 01:20:17.210894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.163 [2024-07-25 01:20:17.210921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.163 qpair failed and we were unable to recover it. 00:34:24.163 [2024-07-25 01:20:17.220652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.220772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.220797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.220811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.220823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.220852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 00:34:24.164 [2024-07-25 01:20:17.230696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.230820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.230846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.230861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.230873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.230901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 
00:34:24.164 [2024-07-25 01:20:17.240704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.240823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.240849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.240864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.240878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.240906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 00:34:24.164 [2024-07-25 01:20:17.250804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.250918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.250944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.250959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.250972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.251000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 00:34:24.164 [2024-07-25 01:20:17.260752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.260873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.260898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.260912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.260925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.260953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 
00:34:24.164 [2024-07-25 01:20:17.270811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.270927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.270952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.270966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.270985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.271014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 00:34:24.164 [2024-07-25 01:20:17.280808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.280920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.280945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.280960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.280973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.281000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 00:34:24.164 [2024-07-25 01:20:17.290886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.290999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.291024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.291038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.291052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.291082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 
00:34:24.164 [2024-07-25 01:20:17.300885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.301039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.301065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.301079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.301092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.301120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 00:34:24.164 [2024-07-25 01:20:17.310926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.164 [2024-07-25 01:20:17.311042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.164 [2024-07-25 01:20:17.311068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.164 [2024-07-25 01:20:17.311083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.164 [2024-07-25 01:20:17.311096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.164 [2024-07-25 01:20:17.311127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.164 qpair failed and we were unable to recover it. 00:34:24.423 [2024-07-25 01:20:17.320924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.423 [2024-07-25 01:20:17.321038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.423 [2024-07-25 01:20:17.321064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.423 [2024-07-25 01:20:17.321078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.423 [2024-07-25 01:20:17.321092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.423 [2024-07-25 01:20:17.321120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.423 qpair failed and we were unable to recover it. 
00:34:24.423 [2024-07-25 01:20:17.331072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.423 [2024-07-25 01:20:17.331191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.423 [2024-07-25 01:20:17.331216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.423 [2024-07-25 01:20:17.331230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.423 [2024-07-25 01:20:17.331249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.423 [2024-07-25 01:20:17.331280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.423 qpair failed and we were unable to recover it. 00:34:24.423 [2024-07-25 01:20:17.341043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.423 [2024-07-25 01:20:17.341196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.423 [2024-07-25 01:20:17.341221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.423 [2024-07-25 01:20:17.341236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.341259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.341289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.351019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.351137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.351162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.351175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.351189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.351218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 
00:34:24.424 [2024-07-25 01:20:17.361143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.361267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.361292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.361314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.361327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.361357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.371130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.371240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.371272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.371287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.371300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.371328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.381130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.381256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.381282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.381297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.381310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.381338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 
00:34:24.424 [2024-07-25 01:20:17.391134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.391287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.391316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.391332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.391346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.391376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.401156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.401276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.401302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.401317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.401330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.401359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.411269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.411385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.411411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.411425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.411438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.411467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 
00:34:24.424 [2024-07-25 01:20:17.421325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.421461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.421488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.421505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.421521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.421550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.431238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.431371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.431397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.431412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.431425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.431455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.441287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.441451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.441480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.441496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.441509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.441538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 
00:34:24.424 [2024-07-25 01:20:17.451391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.451501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.451527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.451547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.451561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.451590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.461351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.461471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.461497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.461511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.461524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.461553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.424 qpair failed and we were unable to recover it. 00:34:24.424 [2024-07-25 01:20:17.471334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.424 [2024-07-25 01:20:17.471446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.424 [2024-07-25 01:20:17.471472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.424 [2024-07-25 01:20:17.471486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.424 [2024-07-25 01:20:17.471499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.424 [2024-07-25 01:20:17.471527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 
00:34:24.425 [2024-07-25 01:20:17.481467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.481581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.481606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.481620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.481634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.481662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 00:34:24.425 [2024-07-25 01:20:17.491515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.491634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.491660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.491675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.491688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.491718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 00:34:24.425 [2024-07-25 01:20:17.501555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.501675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.501701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.501715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.501728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.501756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 
00:34:24.425 [2024-07-25 01:20:17.511498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.511623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.511648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.511663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.511676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.511705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 00:34:24.425 [2024-07-25 01:20:17.521519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.521636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.521661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.521675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.521687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.521715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 00:34:24.425 [2024-07-25 01:20:17.531538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.531658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.531683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.531698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.531710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.531738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 
00:34:24.425 [2024-07-25 01:20:17.541610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.541759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.541784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.541804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.541819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.541847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 00:34:24.425 [2024-07-25 01:20:17.551608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.551750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.551776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.551790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.551804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.551832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 00:34:24.425 [2024-07-25 01:20:17.561632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.561790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.561815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.561830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.561843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.561871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 
00:34:24.425 [2024-07-25 01:20:17.571732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.425 [2024-07-25 01:20:17.571845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.425 [2024-07-25 01:20:17.571871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.425 [2024-07-25 01:20:17.571886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.425 [2024-07-25 01:20:17.571898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.425 [2024-07-25 01:20:17.571926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.425 qpair failed and we were unable to recover it. 00:34:24.683 [2024-07-25 01:20:17.581690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.683 [2024-07-25 01:20:17.581810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.683 [2024-07-25 01:20:17.581835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.683 [2024-07-25 01:20:17.581849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.683 [2024-07-25 01:20:17.581863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.683 [2024-07-25 01:20:17.581891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.683 qpair failed and we were unable to recover it. 00:34:24.683 [2024-07-25 01:20:17.591713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.683 [2024-07-25 01:20:17.591828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.683 [2024-07-25 01:20:17.591853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.683 [2024-07-25 01:20:17.591868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.591881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.591910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 
00:34:24.684 [2024-07-25 01:20:17.601737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.601851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.601877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.601891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.601904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.601932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 00:34:24.684 [2024-07-25 01:20:17.611773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.611927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.611952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.611967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.611980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.612007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 00:34:24.684 [2024-07-25 01:20:17.621903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.622045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.622071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.622085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.622099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.622127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 
00:34:24.684 [2024-07-25 01:20:17.631825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.631944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.631975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.631990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.632003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.632032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 00:34:24.684 [2024-07-25 01:20:17.641827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.641944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.641970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.641984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.641997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.642025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 00:34:24.684 [2024-07-25 01:20:17.651878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.651999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.652024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.652038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.652052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.652079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 
00:34:24.684 [2024-07-25 01:20:17.662006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.662172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.662197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.662211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.662224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.662259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 00:34:24.684 [2024-07-25 01:20:17.671928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.672049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.672075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.672089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.672102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.672136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 00:34:24.684 [2024-07-25 01:20:17.681987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.682114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.682139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.682154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.682167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.682195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 
00:34:24.684 [2024-07-25 01:20:17.692002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.692111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.692136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.692151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.692165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.692192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 00:34:24.684 [2024-07-25 01:20:17.702148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.702281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.702307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.702322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.702335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.702364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 00:34:24.684 [2024-07-25 01:20:17.712182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.712303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.712328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.712343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.684 [2024-07-25 01:20:17.712357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.684 [2024-07-25 01:20:17.712386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.684 qpair failed and we were unable to recover it. 
00:34:24.684 [2024-07-25 01:20:17.722214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.684 [2024-07-25 01:20:17.722343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.684 [2024-07-25 01:20:17.722374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.684 [2024-07-25 01:20:17.722389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.722402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.722430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 00:34:24.685 [2024-07-25 01:20:17.732173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.732299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.732324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.732338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.732351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.732380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 00:34:24.685 [2024-07-25 01:20:17.742142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.742269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.742295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.742309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.742322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.742350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 
00:34:24.685 [2024-07-25 01:20:17.752167] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.752328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.752354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.752369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.752382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.752411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 00:34:24.685 [2024-07-25 01:20:17.762184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.762312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.762338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.762352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.762365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.762399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 00:34:24.685 [2024-07-25 01:20:17.772324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.772441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.772467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.772481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.772494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.772522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 
00:34:24.685 [2024-07-25 01:20:17.782282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.782400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.782424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.782439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.782452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.782480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 00:34:24.685 [2024-07-25 01:20:17.792309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.792436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.792461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.792476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.792489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.792517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 00:34:24.685 [2024-07-25 01:20:17.802419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.802545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.802570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.802584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.802598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.802625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 
00:34:24.685 [2024-07-25 01:20:17.812419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.812533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.812565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.812581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.812594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.812622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 00:34:24.685 [2024-07-25 01:20:17.822458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.822576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.822602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.822616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.822630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.822660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 00:34:24.685 [2024-07-25 01:20:17.832381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.685 [2024-07-25 01:20:17.832539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.685 [2024-07-25 01:20:17.832564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.685 [2024-07-25 01:20:17.832578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.685 [2024-07-25 01:20:17.832591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.685 [2024-07-25 01:20:17.832619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.685 qpair failed and we were unable to recover it. 
00:34:24.944 [2024-07-25 01:20:17.842423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.842535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.842561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.944 [2024-07-25 01:20:17.842575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.944 [2024-07-25 01:20:17.842589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.944 [2024-07-25 01:20:17.842616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.944 qpair failed and we were unable to recover it. 00:34:24.944 [2024-07-25 01:20:17.852474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.852587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.852612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.944 [2024-07-25 01:20:17.852626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.944 [2024-07-25 01:20:17.852644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.944 [2024-07-25 01:20:17.852673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.944 qpair failed and we were unable to recover it. 00:34:24.944 [2024-07-25 01:20:17.862581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.862703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.862729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.944 [2024-07-25 01:20:17.862743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.944 [2024-07-25 01:20:17.862756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.944 [2024-07-25 01:20:17.862784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.944 qpair failed and we were unable to recover it. 
00:34:24.944 [2024-07-25 01:20:17.872553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.872680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.872705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.944 [2024-07-25 01:20:17.872720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.944 [2024-07-25 01:20:17.872733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.944 [2024-07-25 01:20:17.872761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.944 qpair failed and we were unable to recover it. 00:34:24.944 [2024-07-25 01:20:17.882538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.882650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.882676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.944 [2024-07-25 01:20:17.882690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.944 [2024-07-25 01:20:17.882704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.944 [2024-07-25 01:20:17.882731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.944 qpair failed and we were unable to recover it. 00:34:24.944 [2024-07-25 01:20:17.892545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.892657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.892683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.944 [2024-07-25 01:20:17.892698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.944 [2024-07-25 01:20:17.892711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.944 [2024-07-25 01:20:17.892739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.944 qpair failed and we were unable to recover it. 
00:34:24.944 [2024-07-25 01:20:17.902632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.902756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.902781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.944 [2024-07-25 01:20:17.902795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.944 [2024-07-25 01:20:17.902808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.944 [2024-07-25 01:20:17.902836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.944 qpair failed and we were unable to recover it. 00:34:24.944 [2024-07-25 01:20:17.912695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.912815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.912839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.944 [2024-07-25 01:20:17.912854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.944 [2024-07-25 01:20:17.912867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.944 [2024-07-25 01:20:17.912895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.944 qpair failed and we were unable to recover it. 00:34:24.944 [2024-07-25 01:20:17.922672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.944 [2024-07-25 01:20:17.922833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.944 [2024-07-25 01:20:17.922858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:17.922873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:17.922886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:17.922916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 
00:34:24.945 [2024-07-25 01:20:17.932659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:17.932770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:17.932796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:17.932810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:17.932823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:17.932851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 00:34:24.945 [2024-07-25 01:20:17.942736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:17.942864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:17.942888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:17.942903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:17.942921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:17.942950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 00:34:24.945 [2024-07-25 01:20:17.952719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:17.952835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:17.952861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:17.952875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:17.952888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:17.952916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 
00:34:24.945 [2024-07-25 01:20:17.962837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:17.962953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:17.962978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:17.962992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:17.963004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:17.963032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 00:34:24.945 [2024-07-25 01:20:17.972820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:17.972941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:17.972966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:17.972980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:17.972993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:17.973021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 00:34:24.945 [2024-07-25 01:20:17.982815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:17.982934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:17.982959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:17.982973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:17.982987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:17.983014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 
00:34:24.945 [2024-07-25 01:20:17.992920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:17.993045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:17.993071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:17.993085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:17.993100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:17.993129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 00:34:24.945 [2024-07-25 01:20:18.002970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:18.003100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:18.003126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:18.003140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:18.003153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:18.003181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 00:34:24.945 [2024-07-25 01:20:18.012881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:18.012992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:18.013017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:18.013032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:18.013046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:18.013074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 
00:34:24.945 [2024-07-25 01:20:18.022935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:18.023059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:18.023084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:18.023098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:18.023112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:18.023140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 00:34:24.945 [2024-07-25 01:20:18.032967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:18.033092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:18.033117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:18.033132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:18.033151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:18.033180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 00:34:24.945 [2024-07-25 01:20:18.043031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:18.043153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:18.043179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:18.043193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.945 [2024-07-25 01:20:18.043207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.945 [2024-07-25 01:20:18.043235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.945 qpair failed and we were unable to recover it. 
00:34:24.945 [2024-07-25 01:20:18.053055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.945 [2024-07-25 01:20:18.053217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.945 [2024-07-25 01:20:18.053250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.945 [2024-07-25 01:20:18.053266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.946 [2024-07-25 01:20:18.053280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.946 [2024-07-25 01:20:18.053309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.946 qpair failed and we were unable to recover it. 00:34:24.946 [2024-07-25 01:20:18.063081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.946 [2024-07-25 01:20:18.063209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.946 [2024-07-25 01:20:18.063234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.946 [2024-07-25 01:20:18.063256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.946 [2024-07-25 01:20:18.063269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.946 [2024-07-25 01:20:18.063298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.946 qpair failed and we were unable to recover it. 00:34:24.946 [2024-07-25 01:20:18.073062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.946 [2024-07-25 01:20:18.073177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.946 [2024-07-25 01:20:18.073203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.946 [2024-07-25 01:20:18.073218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.946 [2024-07-25 01:20:18.073231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.946 [2024-07-25 01:20:18.073267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.946 qpair failed and we were unable to recover it. 
00:34:24.946 [2024-07-25 01:20:18.083078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.946 [2024-07-25 01:20:18.083194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.946 [2024-07-25 01:20:18.083220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.946 [2024-07-25 01:20:18.083235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.946 [2024-07-25 01:20:18.083257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.946 [2024-07-25 01:20:18.083286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.946 qpair failed and we were unable to recover it. 00:34:24.946 [2024-07-25 01:20:18.093180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.946 [2024-07-25 01:20:18.093320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.946 [2024-07-25 01:20:18.093346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.946 [2024-07-25 01:20:18.093361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.946 [2024-07-25 01:20:18.093374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:24.946 [2024-07-25 01:20:18.093403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.946 qpair failed and we were unable to recover it. 00:34:25.206 [2024-07-25 01:20:18.103160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.103299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.103325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.103340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.103353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.103382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.206 qpair failed and we were unable to recover it. 
00:34:25.206 [2024-07-25 01:20:18.113196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.113330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.113359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.113375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.113389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.113419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-07-25 01:20:18.123220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.123344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.123370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.123390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.123404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.123434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-07-25 01:20:18.133273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.133386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.133412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.133426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.133439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.133468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.206 qpair failed and we were unable to recover it. 
00:34:25.206 [2024-07-25 01:20:18.143282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.143451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.143477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.143492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.143505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.143533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-07-25 01:20:18.153349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.153492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.153520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.153535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.153548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.153576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-07-25 01:20:18.163332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.163488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.163515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.163530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.163543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.163571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.206 qpair failed and we were unable to recover it. 
00:34:25.206 [2024-07-25 01:20:18.173392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.173511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.173538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.173555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.173571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.173600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.206 qpair failed and we were unable to recover it. 00:34:25.206 [2024-07-25 01:20:18.183481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.206 [2024-07-25 01:20:18.183602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.206 [2024-07-25 01:20:18.183628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.206 [2024-07-25 01:20:18.183642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.206 [2024-07-25 01:20:18.183656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.206 [2024-07-25 01:20:18.183687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.193479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.193589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.193615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.193630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.193643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.193672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-25 01:20:18.203442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.203561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.203588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.203602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.203615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.203645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.213481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.213599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.213625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.213645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.213660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.213689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.223539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.223707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.223732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.223747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.223760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.223788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-25 01:20:18.233509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.233651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.233677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.233692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.233705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.233734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.243531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.243649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.243675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.243689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.243702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.243731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.253557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.253691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.253717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.253731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.253745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.253773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-25 01:20:18.263660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.263802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.263828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.263842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.263855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.263886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.273761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.273898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.273924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.273939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.273952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.273980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.283673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.283787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.283813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.283827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.283840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.283869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-25 01:20:18.293794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.293926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.293952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.293967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.293980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.294009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.303736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.303853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.303879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.303898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.303913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.303941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-25 01:20:18.313809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.207 [2024-07-25 01:20:18.313927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.207 [2024-07-25 01:20:18.313953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.207 [2024-07-25 01:20:18.313968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.207 [2024-07-25 01:20:18.313981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.207 [2024-07-25 01:20:18.314009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-25 01:20:18.323742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.208 [2024-07-25 01:20:18.323856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.208 [2024-07-25 01:20:18.323882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.208 [2024-07-25 01:20:18.323896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.208 [2024-07-25 01:20:18.323909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.208 [2024-07-25 01:20:18.323937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-25 01:20:18.333811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.208 [2024-07-25 01:20:18.333932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.208 [2024-07-25 01:20:18.333958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.208 [2024-07-25 01:20:18.333972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.208 [2024-07-25 01:20:18.333986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.208 [2024-07-25 01:20:18.334013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-25 01:20:18.343824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.208 [2024-07-25 01:20:18.343945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.208 [2024-07-25 01:20:18.343969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.208 [2024-07-25 01:20:18.343983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.208 [2024-07-25 01:20:18.343996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.208 [2024-07-25 01:20:18.344027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-25 01:20:18.353837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.208 [2024-07-25 01:20:18.353956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.208 [2024-07-25 01:20:18.353982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.208 [2024-07-25 01:20:18.353997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.208 [2024-07-25 01:20:18.354010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.208 [2024-07-25 01:20:18.354037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.467 [2024-07-25 01:20:18.363908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.364019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.364044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.364058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.364071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.364099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 00:34:25.467 [2024-07-25 01:20:18.373953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.374069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.374095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.374109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.374122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.374150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 
00:34:25.467 [2024-07-25 01:20:18.383948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.384064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.384090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.384105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.384117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.384147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 00:34:25.467 [2024-07-25 01:20:18.393956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.394072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.394103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.394119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.394133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.394161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 00:34:25.467 [2024-07-25 01:20:18.403971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.404083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.404108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.404124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.404137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.404166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 
00:34:25.467 [2024-07-25 01:20:18.414005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.414116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.414142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.414156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.414169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.414196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 00:34:25.467 [2024-07-25 01:20:18.424068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.424189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.424215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.424229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.424249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.424280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 00:34:25.467 [2024-07-25 01:20:18.434063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.434176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.434203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.434217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.434230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.434270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 
00:34:25.467 [2024-07-25 01:20:18.444088] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.444204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.444230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.444252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.444268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.444297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 00:34:25.467 [2024-07-25 01:20:18.454116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.454230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.467 [2024-07-25 01:20:18.454263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.467 [2024-07-25 01:20:18.454279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.467 [2024-07-25 01:20:18.454293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.467 [2024-07-25 01:20:18.454322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.467 qpair failed and we were unable to recover it. 00:34:25.467 [2024-07-25 01:20:18.464164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.467 [2024-07-25 01:20:18.464314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.464339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.464353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.464366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.464394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 
00:34:25.468 [2024-07-25 01:20:18.474182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.474312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.474338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.474353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.474366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.474394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-07-25 01:20:18.484231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.484357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.484388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.484403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.484416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.484445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-07-25 01:20:18.494264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.494376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.494402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.494416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.494429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.494457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 
00:34:25.468 [2024-07-25 01:20:18.504382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.504500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.504525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.504539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.504552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.504582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-07-25 01:20:18.514379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.514540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.514566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.514580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.514594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.514622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-07-25 01:20:18.524331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.524452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.524476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.524490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.524502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.524535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 
00:34:25.468 [2024-07-25 01:20:18.534400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.534552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.534578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.534592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.534604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.534632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-07-25 01:20:18.544417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.544543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.544568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.544582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.544595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.544623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-07-25 01:20:18.554499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.554616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.554642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.554657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.554670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.554698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 
00:34:25.468 [2024-07-25 01:20:18.564440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.564553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.564578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.564593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.564606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.564634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.468 [2024-07-25 01:20:18.574596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.468 [2024-07-25 01:20:18.574732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.468 [2024-07-25 01:20:18.574765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.468 [2024-07-25 01:20:18.574779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.468 [2024-07-25 01:20:18.574793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.468 [2024-07-25 01:20:18.574821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.468 qpair failed and we were unable to recover it. 00:34:25.469 [2024-07-25 01:20:18.584505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-07-25 01:20:18.584653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-07-25 01:20:18.584678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-07-25 01:20:18.584693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-07-25 01:20:18.584706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.469 [2024-07-25 01:20:18.584734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.469 qpair failed and we were unable to recover it. 
00:34:25.469 [2024-07-25 01:20:18.594608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-07-25 01:20:18.594727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-07-25 01:20:18.594753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-07-25 01:20:18.594767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-07-25 01:20:18.594780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.469 [2024-07-25 01:20:18.594809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-07-25 01:20:18.604574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-07-25 01:20:18.604689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-07-25 01:20:18.604715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-07-25 01:20:18.604729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-07-25 01:20:18.604742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.469 [2024-07-25 01:20:18.604771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.469 qpair failed and we were unable to recover it. 00:34:25.469 [2024-07-25 01:20:18.614650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.469 [2024-07-25 01:20:18.614758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.469 [2024-07-25 01:20:18.614784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.469 [2024-07-25 01:20:18.614798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.469 [2024-07-25 01:20:18.614810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.469 [2024-07-25 01:20:18.614844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.469 qpair failed and we were unable to recover it. 
00:34:25.728 [2024-07-25 01:20:18.624589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.624708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.624733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.624748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.624761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.624791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 00:34:25.728 [2024-07-25 01:20:18.634701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.634853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.634879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.634894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.634907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.634936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 00:34:25.728 [2024-07-25 01:20:18.644729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.644843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.644869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.644883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.644897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.644925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 
00:34:25.728 [2024-07-25 01:20:18.654716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.654838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.654863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.654877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.654891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.654921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 00:34:25.728 [2024-07-25 01:20:18.664771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.664893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.664923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.664938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.664951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.664979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 00:34:25.728 [2024-07-25 01:20:18.674810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.674921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.674946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.674961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.674974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.675001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 
00:34:25.728 [2024-07-25 01:20:18.684821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.684945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.684971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.684985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.684998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.685026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 00:34:25.728 [2024-07-25 01:20:18.694794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.694916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.694941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.694955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.694969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.694997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 00:34:25.728 [2024-07-25 01:20:18.704823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.704941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.704966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.704980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.728 [2024-07-25 01:20:18.704999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.728 [2024-07-25 01:20:18.705028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.728 qpair failed and we were unable to recover it. 
00:34:25.728 [2024-07-25 01:20:18.714866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.728 [2024-07-25 01:20:18.714986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.728 [2024-07-25 01:20:18.715012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.728 [2024-07-25 01:20:18.715027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.715041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.715069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.724866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.724982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.725007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.725022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.725035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.725063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.734925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.735085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.735110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.735124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.735137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.735165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 
00:34:25.729 [2024-07-25 01:20:18.744976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.745135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.745160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.745174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.745187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.745215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.755012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.755127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.755152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.755167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.755179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.755208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.765005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.765119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.765145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.765159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.765172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.765201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 
00:34:25.729 [2024-07-25 01:20:18.775022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.775133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.775160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.775175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.775189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.775217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.785058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.785177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.785203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.785218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.785231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.785266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.795119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.795247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.795275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.795289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.795308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.795338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 
00:34:25.729 [2024-07-25 01:20:18.805127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.805264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.805290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.805304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.805317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.805346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.815186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.815317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.815343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.815358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.815371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.815399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.825182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.825301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.825327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.825342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.825355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.825386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 
00:34:25.729 [2024-07-25 01:20:18.835208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.835334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.835360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.835374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.835387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.729 [2024-07-25 01:20:18.835416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.729 qpair failed and we were unable to recover it. 00:34:25.729 [2024-07-25 01:20:18.845235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.729 [2024-07-25 01:20:18.845360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.729 [2024-07-25 01:20:18.845386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.729 [2024-07-25 01:20:18.845400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.729 [2024-07-25 01:20:18.845414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.730 [2024-07-25 01:20:18.845442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-07-25 01:20:18.855288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-07-25 01:20:18.855403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-07-25 01:20:18.855429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-07-25 01:20:18.855444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-07-25 01:20:18.855457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.730 [2024-07-25 01:20:18.855485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.730 qpair failed and we were unable to recover it. 
00:34:25.730 [2024-07-25 01:20:18.865319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-07-25 01:20:18.865453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-07-25 01:20:18.865478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-07-25 01:20:18.865492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-07-25 01:20:18.865505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.730 [2024-07-25 01:20:18.865533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.730 [2024-07-25 01:20:18.875404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.730 [2024-07-25 01:20:18.875526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.730 [2024-07-25 01:20:18.875552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.730 [2024-07-25 01:20:18.875567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.730 [2024-07-25 01:20:18.875579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.730 [2024-07-25 01:20:18.875607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.730 qpair failed and we were unable to recover it. 00:34:25.989 [2024-07-25 01:20:18.885356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.885486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.885514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.885538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.885552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.989 [2024-07-25 01:20:18.885582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.989 qpair failed and we were unable to recover it. 
00:34:25.989 [2024-07-25 01:20:18.895377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.895514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.895541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.895555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.895569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.989 [2024-07-25 01:20:18.895596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.989 qpair failed and we were unable to recover it. 00:34:25.989 [2024-07-25 01:20:18.905402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.905516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.905542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.905556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.905569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.989 [2024-07-25 01:20:18.905597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.989 qpair failed and we were unable to recover it. 00:34:25.989 [2024-07-25 01:20:18.915478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.915601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.915627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.915642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.915655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.989 [2024-07-25 01:20:18.915683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.989 qpair failed and we were unable to recover it. 
00:34:25.989 [2024-07-25 01:20:18.925442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.925549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.925575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.925589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.925602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.989 [2024-07-25 01:20:18.925630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.989 qpair failed and we were unable to recover it. 00:34:25.989 [2024-07-25 01:20:18.935484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.935597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.935623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.935637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.935651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.989 [2024-07-25 01:20:18.935678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.989 qpair failed and we were unable to recover it. 00:34:25.989 [2024-07-25 01:20:18.945566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.945684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.945710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.945724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.945737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.989 [2024-07-25 01:20:18.945765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.989 qpair failed and we were unable to recover it. 
00:34:25.989 [2024-07-25 01:20:18.955572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.955698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.955723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.955738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.955751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.989 [2024-07-25 01:20:18.955779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.989 qpair failed and we were unable to recover it. 00:34:25.989 [2024-07-25 01:20:18.965567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.989 [2024-07-25 01:20:18.965722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.989 [2024-07-25 01:20:18.965748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.989 [2024-07-25 01:20:18.965762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.989 [2024-07-25 01:20:18.965775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:18.965803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:18.975592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:18.975723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:18.975749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:18.975770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:18.975784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:18.975813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 
00:34:25.990 [2024-07-25 01:20:18.985659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:18.985823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:18.985849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:18.985863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:18.985876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:18.985904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:18.995689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:18.995804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:18.995830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:18.995844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:18.995858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:18.995885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:19.005782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.005916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.005941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.005955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.005968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.005995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 
00:34:25.990 [2024-07-25 01:20:19.015798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.015912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.015938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.015952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.015964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.015992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:19.025768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.025890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.025915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.025929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.025942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.025970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:19.035814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.035928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.035954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.035969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.035982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.036012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 
00:34:25.990 [2024-07-25 01:20:19.045864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.045982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.046007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.046022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.046035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.046063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:19.055837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.055951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.055976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.055990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.056003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.056031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:19.065853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.065984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.066009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.066030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.066044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.066074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 
00:34:25.990 [2024-07-25 01:20:19.075907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.076024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.076049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.076064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.076077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.076105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:19.086006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.086121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.086146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.086161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.086174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.086202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 00:34:25.990 [2024-07-25 01:20:19.095953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.990 [2024-07-25 01:20:19.096079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.990 [2024-07-25 01:20:19.096105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.990 [2024-07-25 01:20:19.096119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.990 [2024-07-25 01:20:19.096133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:25.990 [2024-07-25 01:20:19.096160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.990 qpair failed and we were unable to recover it. 
[... the identical six-error CONNECT failure sequence ("Unknown controller ID 0x1" through "qpair failed and we were unable to recover it.") repeats for 63 further I/O qpair connect attempts, roughly one every 10 ms, spanning [2024-07-25 01:20:19.106069] through [2024-07-25 01:20:19.727933] (elapsed 00:34:25.990 to 00:34:26.772); only the timestamps differ ...]
00:34:26.772 [2024-07-25 01:20:19.737771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.772 [2024-07-25 01:20:19.737896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.772 [2024-07-25 01:20:19.737922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.772 [2024-07-25 01:20:19.737942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.772 [2024-07-25 01:20:19.737956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.772 [2024-07-25 01:20:19.737984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.772 qpair failed and we were unable to recover it. 00:34:26.772 [2024-07-25 01:20:19.747815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.772 [2024-07-25 01:20:19.747951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.772 [2024-07-25 01:20:19.747978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.772 [2024-07-25 01:20:19.747992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.772 [2024-07-25 01:20:19.748005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.772 [2024-07-25 01:20:19.748036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.772 qpair failed and we were unable to recover it. 00:34:26.772 [2024-07-25 01:20:19.757915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.772 [2024-07-25 01:20:19.758054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.772 [2024-07-25 01:20:19.758080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.772 [2024-07-25 01:20:19.758094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.772 [2024-07-25 01:20:19.758108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.772 [2024-07-25 01:20:19.758136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.772 qpair failed and we were unable to recover it. 
00:34:26.772 [2024-07-25 01:20:19.767864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.772 [2024-07-25 01:20:19.767980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.772 [2024-07-25 01:20:19.768006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.772 [2024-07-25 01:20:19.768020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.772 [2024-07-25 01:20:19.768033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.772 [2024-07-25 01:20:19.768061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.772 qpair failed and we were unable to recover it. 00:34:26.772 [2024-07-25 01:20:19.777870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.772 [2024-07-25 01:20:19.777985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.772 [2024-07-25 01:20:19.778011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.772 [2024-07-25 01:20:19.778025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.772 [2024-07-25 01:20:19.778038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.772 [2024-07-25 01:20:19.778067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.772 qpair failed and we were unable to recover it. 00:34:26.772 [2024-07-25 01:20:19.787876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.787991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.788016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.788031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.788044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.788072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 
00:34:26.773 [2024-07-25 01:20:19.797934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.798092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.798117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.798132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.798145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.798173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 00:34:26.773 [2024-07-25 01:20:19.807934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.808057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.808084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.808098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.808112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.808139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 00:34:26.773 [2024-07-25 01:20:19.817973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.818081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.818107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.818121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.818133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.818164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 
00:34:26.773 [2024-07-25 01:20:19.828017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.828150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.828175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.828196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.828211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.828239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 00:34:26.773 [2024-07-25 01:20:19.838020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.838139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.838164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.838179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.838192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.838220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 00:34:26.773 [2024-07-25 01:20:19.848139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.848258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.848284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.848298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.848312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.848340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 
00:34:26.773 [2024-07-25 01:20:19.858093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.858216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.858248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.858265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.858279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.858308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 00:34:26.773 [2024-07-25 01:20:19.868253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.868376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.868402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.868416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.868429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.868458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 00:34:26.773 [2024-07-25 01:20:19.878134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.878262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.878298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.878313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.878326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.878354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 
00:34:26.773 [2024-07-25 01:20:19.888144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.888264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.888292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.888307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.888320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.888348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 00:34:26.773 [2024-07-25 01:20:19.898177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.898327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.898352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.898366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.898379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.898408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 00:34:26.773 [2024-07-25 01:20:19.908258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.908420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.908445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.773 [2024-07-25 01:20:19.908459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.773 [2024-07-25 01:20:19.908472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.773 [2024-07-25 01:20:19.908499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.773 qpair failed and we were unable to recover it. 
00:34:26.773 [2024-07-25 01:20:19.918240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.773 [2024-07-25 01:20:19.918361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.773 [2024-07-25 01:20:19.918391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.774 [2024-07-25 01:20:19.918406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.774 [2024-07-25 01:20:19.918419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:26.774 [2024-07-25 01:20:19.918448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.774 qpair failed and we were unable to recover it. 00:34:27.033 [2024-07-25 01:20:19.928364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:19.928477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:19.928502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:19.928516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:19.928530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:19.928558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 00:34:27.033 [2024-07-25 01:20:19.938306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:19.938416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:19.938441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:19.938455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:19.938468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:19.938496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 
00:34:27.033 [2024-07-25 01:20:19.948355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:19.948473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:19.948498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:19.948512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:19.948525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:19.948553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 00:34:27.033 [2024-07-25 01:20:19.958355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:19.958471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:19.958497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:19.958511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:19.958524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:19.958553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 00:34:27.033 [2024-07-25 01:20:19.968369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:19.968482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:19.968508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:19.968522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:19.968536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:19.968564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 
00:34:27.033 [2024-07-25 01:20:19.978408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:19.978531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:19.978556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:19.978570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:19.978584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:19.978612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 00:34:27.033 [2024-07-25 01:20:19.988454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:19.988572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:19.988597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:19.988611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:19.988625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:19.988653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 00:34:27.033 [2024-07-25 01:20:19.998573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:19.998698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:19.998723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:19.998737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:19.998750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:19.998779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 
00:34:27.033 [2024-07-25 01:20:20.008500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:20.008623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:20.008656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:20.008674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:20.008687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:20.008716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 00:34:27.033 [2024-07-25 01:20:20.018600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:20.018749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:20.018780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:20.018796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:20.018810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:20.018841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 00:34:27.033 [2024-07-25 01:20:20.028594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:20.028731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:20.028758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:20.028773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.033 [2024-07-25 01:20:20.028786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.033 [2024-07-25 01:20:20.028815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.033 qpair failed and we were unable to recover it. 
00:34:27.033 [2024-07-25 01:20:20.038620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.033 [2024-07-25 01:20:20.038748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.033 [2024-07-25 01:20:20.038775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.033 [2024-07-25 01:20:20.038791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.038805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.038834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.048627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.048748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.048774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.048789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.048802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.048838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.058679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.058792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.058820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.058836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.058852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.058881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 
00:34:27.034 [2024-07-25 01:20:20.068772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.068891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.068917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.068932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.068945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.068974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.078741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.078858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.078883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.078898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.078912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.078940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.088818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.088956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.088982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.088996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.089010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.089039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 
00:34:27.034 [2024-07-25 01:20:20.098794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.098915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.098949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.098965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.098978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.099008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.108782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.108902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.108928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.108943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.108956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.108985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.118800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.118916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.118941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.118955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.118969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.118998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 
00:34:27.034 [2024-07-25 01:20:20.128966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.129100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.129125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.129139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.129153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.129180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.138947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.139062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.139087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.139102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.139114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.139148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.148990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.149108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.149133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.149148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.149161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.149188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 
00:34:27.034 [2024-07-25 01:20:20.158911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.159035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.159061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.159075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.159088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.034 [2024-07-25 01:20:20.159116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.034 qpair failed and we were unable to recover it. 00:34:27.034 [2024-07-25 01:20:20.169021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.034 [2024-07-25 01:20:20.169141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.034 [2024-07-25 01:20:20.169166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.034 [2024-07-25 01:20:20.169180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.034 [2024-07-25 01:20:20.169194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.035 [2024-07-25 01:20:20.169221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.035 qpair failed and we were unable to recover it. 00:34:27.035 [2024-07-25 01:20:20.178978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.035 [2024-07-25 01:20:20.179090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.035 [2024-07-25 01:20:20.179116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.035 [2024-07-25 01:20:20.179130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.035 [2024-07-25 01:20:20.179143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.035 [2024-07-25 01:20:20.179174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.035 qpair failed and we were unable to recover it. 
00:34:27.293 [2024-07-25 01:20:20.189007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.293 [2024-07-25 01:20:20.189151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.293 [2024-07-25 01:20:20.189182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.293 [2024-07-25 01:20:20.189197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.293 [2024-07-25 01:20:20.189211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.293 [2024-07-25 01:20:20.189240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.293 qpair failed and we were unable to recover it. 00:34:27.293 [2024-07-25 01:20:20.199093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.293 [2024-07-25 01:20:20.199214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.293 [2024-07-25 01:20:20.199248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.293 [2024-07-25 01:20:20.199268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.293 [2024-07-25 01:20:20.199282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.293 [2024-07-25 01:20:20.199311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.293 qpair failed and we were unable to recover it. 00:34:27.293 [2024-07-25 01:20:20.209060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.293 [2024-07-25 01:20:20.209172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.293 [2024-07-25 01:20:20.209198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.293 [2024-07-25 01:20:20.209213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.293 [2024-07-25 01:20:20.209226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.293 [2024-07-25 01:20:20.209262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.293 qpair failed and we were unable to recover it. 
00:34:27.293 [2024-07-25 01:20:20.219099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.293 [2024-07-25 01:20:20.219233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.219265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.219280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.219293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.219322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 00:34:27.294 [2024-07-25 01:20:20.229146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.229269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.229295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.229309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.229327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.229356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 00:34:27.294 [2024-07-25 01:20:20.239233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.239382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.239409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.239425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.239441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.239470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 
00:34:27.294 [2024-07-25 01:20:20.249279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.249393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.249419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.249434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.249447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.249475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 00:34:27.294 [2024-07-25 01:20:20.259221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.259347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.259373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.259387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.259401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.259429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 00:34:27.294 [2024-07-25 01:20:20.269295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.269430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.269457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.269472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.269484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.269514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 
00:34:27.294 [2024-07-25 01:20:20.279255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.279374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.279400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.279415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.279428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.279457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 00:34:27.294 [2024-07-25 01:20:20.289297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.289419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.289444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.289459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.289473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.289501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 00:34:27.294 [2024-07-25 01:20:20.299327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.299437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.299462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.299477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.299490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.299518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 
00:34:27.294 [2024-07-25 01:20:20.309476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.309610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.309641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.309656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.309669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.309697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 00:34:27.294 [2024-07-25 01:20:20.319356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.319466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.319502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.319517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.319536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.319566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 00:34:27.294 [2024-07-25 01:20:20.329417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.329535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.329560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.294 [2024-07-25 01:20:20.329574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.294 [2024-07-25 01:20:20.329588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.294 [2024-07-25 01:20:20.329616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.294 qpair failed and we were unable to recover it. 
00:34:27.294 [2024-07-25 01:20:20.339449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.294 [2024-07-25 01:20:20.339573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.294 [2024-07-25 01:20:20.339598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.339612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.339626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.339653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 00:34:27.295 [2024-07-25 01:20:20.349589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.349713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.349738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.349753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.349766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.349793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 00:34:27.295 [2024-07-25 01:20:20.359535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.359689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.359714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.359729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.359742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.359770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 
00:34:27.295 [2024-07-25 01:20:20.369529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.369648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.369673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.369687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.369700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.369728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 00:34:27.295 [2024-07-25 01:20:20.379632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.379745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.379771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.379785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.379798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.379826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 00:34:27.295 [2024-07-25 01:20:20.389664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.389782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.389809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.389824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.389837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.389865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 
00:34:27.295 [2024-07-25 01:20:20.399591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.399702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.399729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.399744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.399758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.399788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 00:34:27.295 [2024-07-25 01:20:20.409603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.409741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.409767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.409782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.409800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.409829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 00:34:27.295 [2024-07-25 01:20:20.419657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.419787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.419814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.419829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.419845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.419874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 
00:34:27.295 [2024-07-25 01:20:20.429735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.429871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.429898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.429917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.429932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.429962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 00:34:27.295 [2024-07-25 01:20:20.439791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.295 [2024-07-25 01:20:20.439912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.295 [2024-07-25 01:20:20.439938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.295 [2024-07-25 01:20:20.439953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.295 [2024-07-25 01:20:20.439966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.295 [2024-07-25 01:20:20.439994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.295 qpair failed and we were unable to recover it. 00:34:27.554 [2024-07-25 01:20:20.449821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.554 [2024-07-25 01:20:20.449976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.554 [2024-07-25 01:20:20.450002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.554 [2024-07-25 01:20:20.450017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.554 [2024-07-25 01:20:20.450030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.554 [2024-07-25 01:20:20.450058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.554 qpair failed and we were unable to recover it. 
00:34:27.554 [2024-07-25 01:20:20.459751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.554 [2024-07-25 01:20:20.459862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.554 [2024-07-25 01:20:20.459888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.554 [2024-07-25 01:20:20.459902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.554 [2024-07-25 01:20:20.459916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.554 [2024-07-25 01:20:20.459944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.554 qpair failed and we were unable to recover it. 00:34:27.554 [2024-07-25 01:20:20.469822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.554 [2024-07-25 01:20:20.469945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.554 [2024-07-25 01:20:20.469972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.554 [2024-07-25 01:20:20.469990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.554 [2024-07-25 01:20:20.470004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.554 [2024-07-25 01:20:20.470034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.554 qpair failed and we were unable to recover it. 00:34:27.554 [2024-07-25 01:20:20.479802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.554 [2024-07-25 01:20:20.479929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.554 [2024-07-25 01:20:20.479955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.554 [2024-07-25 01:20:20.479970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.554 [2024-07-25 01:20:20.479984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.554 [2024-07-25 01:20:20.480012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.554 qpair failed and we were unable to recover it. 
00:34:27.554 [2024-07-25 01:20:20.489854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.554 [2024-07-25 01:20:20.489973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.554 [2024-07-25 01:20:20.489999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.554 [2024-07-25 01:20:20.490014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.554 [2024-07-25 01:20:20.490027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.554 [2024-07-25 01:20:20.490055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.554 qpair failed and we were unable to recover it. 00:34:27.554 [2024-07-25 01:20:20.499885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.554 [2024-07-25 01:20:20.500008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.554 [2024-07-25 01:20:20.500034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.554 [2024-07-25 01:20:20.500054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.554 [2024-07-25 01:20:20.500068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.554 [2024-07-25 01:20:20.500099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.554 qpair failed and we were unable to recover it. 00:34:27.554 [2024-07-25 01:20:20.509933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.554 [2024-07-25 01:20:20.510051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.554 [2024-07-25 01:20:20.510077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.554 [2024-07-25 01:20:20.510091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.554 [2024-07-25 01:20:20.510104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.554 [2024-07-25 01:20:20.510132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.554 qpair failed and we were unable to recover it. 
00:34:27.554 [2024-07-25 01:20:20.519959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.554 [2024-07-25 01:20:20.520134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.554 [2024-07-25 01:20:20.520160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.554 [2024-07-25 01:20:20.520174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.520188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.520216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.529989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.530102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.530127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.530142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.530154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.530183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.539984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.540093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.540118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.540133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.540146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.540173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 
00:34:27.555 [2024-07-25 01:20:20.550116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.550234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.550271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.550286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.550301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.550330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.560047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.560161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.560186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.560201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.560214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.560251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.570163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.570283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.570309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.570323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.570336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.570364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 
00:34:27.555 [2024-07-25 01:20:20.580113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.580222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.580253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.580270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.580283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.580311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.590159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.590295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.590320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.590340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.590355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.590384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.600239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.600363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.600387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.600402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.600415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.600444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 
00:34:27.555 [2024-07-25 01:20:20.610217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.610344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.610370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.610384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.610398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.610426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.620255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.620405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.620430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.620444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.620458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.620485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.630280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.630400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.630425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.630439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.630452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.630480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 
00:34:27.555 [2024-07-25 01:20:20.640361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.640480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.640506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.640520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.640534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.555 [2024-07-25 01:20:20.640562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.555 qpair failed and we were unable to recover it. 00:34:27.555 [2024-07-25 01:20:20.650411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.555 [2024-07-25 01:20:20.650544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.555 [2024-07-25 01:20:20.650569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.555 [2024-07-25 01:20:20.650584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.555 [2024-07-25 01:20:20.650597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.556 [2024-07-25 01:20:20.650624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.556 qpair failed and we were unable to recover it. 00:34:27.556 [2024-07-25 01:20:20.660318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.556 [2024-07-25 01:20:20.660429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.556 [2024-07-25 01:20:20.660455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.556 [2024-07-25 01:20:20.660469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.556 [2024-07-25 01:20:20.660482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.556 [2024-07-25 01:20:20.660510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.556 qpair failed and we were unable to recover it. 
00:34:27.556 [2024-07-25 01:20:20.670462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.556 [2024-07-25 01:20:20.670578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.556 [2024-07-25 01:20:20.670604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.556 [2024-07-25 01:20:20.670619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.556 [2024-07-25 01:20:20.670632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.556 [2024-07-25 01:20:20.670660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.556 qpair failed and we were unable to recover it. 00:34:27.556 [2024-07-25 01:20:20.680474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.556 [2024-07-25 01:20:20.680635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.556 [2024-07-25 01:20:20.680660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.556 [2024-07-25 01:20:20.680680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.556 [2024-07-25 01:20:20.680694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.556 [2024-07-25 01:20:20.680724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.556 qpair failed and we were unable to recover it. 00:34:27.556 [2024-07-25 01:20:20.690566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.556 [2024-07-25 01:20:20.690684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.556 [2024-07-25 01:20:20.690709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.556 [2024-07-25 01:20:20.690724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.556 [2024-07-25 01:20:20.690737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.556 [2024-07-25 01:20:20.690766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.556 qpair failed and we were unable to recover it. 
00:34:27.556 [2024-07-25 01:20:20.700444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.556 [2024-07-25 01:20:20.700577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.556 [2024-07-25 01:20:20.700603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.556 [2024-07-25 01:20:20.700617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.556 [2024-07-25 01:20:20.700630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.556 [2024-07-25 01:20:20.700657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.556 qpair failed and we were unable to recover it. 00:34:27.815 [2024-07-25 01:20:20.710484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.710614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.710639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.710654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.710667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.710694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 00:34:27.815 [2024-07-25 01:20:20.720525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.720640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.720666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.720680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.720695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.720724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 
00:34:27.815 [2024-07-25 01:20:20.730621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.730756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.730781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.730796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.730809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.730837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 00:34:27.815 [2024-07-25 01:20:20.740538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.740650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.740676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.740690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.740703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.740731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 00:34:27.815 [2024-07-25 01:20:20.750678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.750794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.750819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.750833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.750847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.750875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 
00:34:27.815 [2024-07-25 01:20:20.760606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.760723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.760749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.760764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.760777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.760806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 00:34:27.815 [2024-07-25 01:20:20.770634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.770751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.770782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.770797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.770810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.770839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 00:34:27.815 [2024-07-25 01:20:20.780686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.780802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.780828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.780842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.780855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.780883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 
00:34:27.815 [2024-07-25 01:20:20.790753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.790885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.790910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.790924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.790937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.790965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 00:34:27.815 [2024-07-25 01:20:20.800743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.800857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.800883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.800897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.800910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.800939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 00:34:27.815 [2024-07-25 01:20:20.810812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.815 [2024-07-25 01:20:20.810923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.815 [2024-07-25 01:20:20.810949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.815 [2024-07-25 01:20:20.810964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.815 [2024-07-25 01:20:20.810977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:27.815 [2024-07-25 01:20:20.811011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.815 qpair failed and we were unable to recover it. 
00:34:28.337 [2024-07-25 01:20:21.452612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.337 [2024-07-25 01:20:21.452725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.337 [2024-07-25 01:20:21.452751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.337 [2024-07-25 01:20:21.452765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.337 [2024-07-25 01:20:21.452779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.337 [2024-07-25 01:20:21.452807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.337 qpair failed and we were unable to recover it. 00:34:28.337 [2024-07-25 01:20:21.462618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.337 [2024-07-25 01:20:21.462747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.337 [2024-07-25 01:20:21.462772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.337 [2024-07-25 01:20:21.462786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.337 [2024-07-25 01:20:21.462800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.337 [2024-07-25 01:20:21.462829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.337 qpair failed and we were unable to recover it. 00:34:28.337 [2024-07-25 01:20:21.472664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.337 [2024-07-25 01:20:21.472785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.337 [2024-07-25 01:20:21.472811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.337 [2024-07-25 01:20:21.472826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.337 [2024-07-25 01:20:21.472840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.337 [2024-07-25 01:20:21.472870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.337 qpair failed and we were unable to recover it. 
00:34:28.337 [2024-07-25 01:20:21.482660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.337 [2024-07-25 01:20:21.482774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.337 [2024-07-25 01:20:21.482799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.337 [2024-07-25 01:20:21.482814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.337 [2024-07-25 01:20:21.482827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.337 [2024-07-25 01:20:21.482856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.337 qpair failed and we were unable to recover it. 00:34:28.596 [2024-07-25 01:20:21.492792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.492907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.492933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.492947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.492960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.492988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 00:34:28.596 [2024-07-25 01:20:21.502794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.502912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.502937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.502952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.502965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.502993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 
00:34:28.596 [2024-07-25 01:20:21.512850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.512981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.513006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.513021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.513034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.513062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 00:34:28.596 [2024-07-25 01:20:21.522806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.522920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.522945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.522960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.522973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.523003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 00:34:28.596 [2024-07-25 01:20:21.532796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.532915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.532944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.532958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.532970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.532998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 
00:34:28.596 [2024-07-25 01:20:21.542873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.542993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.543019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.543033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.543046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.543073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 00:34:28.596 [2024-07-25 01:20:21.552860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.552977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.553003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.553017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.553030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.553058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 00:34:28.596 [2024-07-25 01:20:21.562879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.562997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.563023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.563038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.563051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.563080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 
00:34:28.596 [2024-07-25 01:20:21.572934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.573054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.573080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.573094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.573107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.596 [2024-07-25 01:20:21.573135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.596 qpair failed and we were unable to recover it. 00:34:28.596 [2024-07-25 01:20:21.582975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.596 [2024-07-25 01:20:21.583085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.596 [2024-07-25 01:20:21.583110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.596 [2024-07-25 01:20:21.583124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.596 [2024-07-25 01:20:21.583139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.583168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.592983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.593095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.593120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.593135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.593148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.593175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 
00:34:28.597 [2024-07-25 01:20:21.603024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.603140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.603166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.603180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.603193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.603222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.613015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.613125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.613150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.613165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.613178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.613206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.623081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.623209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.623253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.623272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.623286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.623315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 
00:34:28.597 [2024-07-25 01:20:21.633177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.633299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.633324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.633339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.633353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.633381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.643140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.643268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.643294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.643309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.643322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.643352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.653173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.653329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.653355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.653370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.653382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.653411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 
00:34:28.597 [2024-07-25 01:20:21.663152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.663270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.663296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.663310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.663323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.663357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.673238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.673372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.673397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.673411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.673425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.673453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.683226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.683367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.683393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.683408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.683422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.683452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 
00:34:28.597 [2024-07-25 01:20:21.693272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.693394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.693420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.693434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.693448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.693476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.703284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.703399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.703424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.703439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.703452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.703480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 00:34:28.597 [2024-07-25 01:20:21.713400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.597 [2024-07-25 01:20:21.713519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.597 [2024-07-25 01:20:21.713549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.597 [2024-07-25 01:20:21.713564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.597 [2024-07-25 01:20:21.713578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.597 [2024-07-25 01:20:21.713606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.597 qpair failed and we were unable to recover it. 
00:34:28.598 [2024-07-25 01:20:21.723358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.598 [2024-07-25 01:20:21.723479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.598 [2024-07-25 01:20:21.723506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.598 [2024-07-25 01:20:21.723525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.598 [2024-07-25 01:20:21.723540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.598 [2024-07-25 01:20:21.723569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.598 qpair failed and we were unable to recover it. 00:34:28.598 [2024-07-25 01:20:21.733372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.598 [2024-07-25 01:20:21.733486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.598 [2024-07-25 01:20:21.733513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.598 [2024-07-25 01:20:21.733528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.598 [2024-07-25 01:20:21.733541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.598 [2024-07-25 01:20:21.733571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.598 qpair failed and we were unable to recover it. 00:34:28.598 [2024-07-25 01:20:21.743410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.598 [2024-07-25 01:20:21.743525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.598 [2024-07-25 01:20:21.743551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.598 [2024-07-25 01:20:21.743566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.598 [2024-07-25 01:20:21.743579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.598 [2024-07-25 01:20:21.743607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.598 qpair failed and we were unable to recover it. 
00:34:28.857 [2024-07-25 01:20:21.753445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.753561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.753587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.753601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.753614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.753648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 00:34:28.857 [2024-07-25 01:20:21.763495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.763637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.763663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.763677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.763691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.763718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 00:34:28.857 [2024-07-25 01:20:21.773592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.773711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.773736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.773751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.773764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.773792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 
00:34:28.857 [2024-07-25 01:20:21.783542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.783668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.783694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.783709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.783722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.783750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 00:34:28.857 [2024-07-25 01:20:21.793592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.793709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.793735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.793749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.793763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.793791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 00:34:28.857 [2024-07-25 01:20:21.803611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.803741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.803772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.803786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.803800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.803828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 
00:34:28.857 [2024-07-25 01:20:21.813620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.813744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.813770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.813785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.813798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.813826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 00:34:28.857 [2024-07-25 01:20:21.823623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.823732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.823757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.823772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.823785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.823813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 00:34:28.857 [2024-07-25 01:20:21.833693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.833812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.833837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.833851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.833864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.833892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 
00:34:28.857 [2024-07-25 01:20:21.843736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.843855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.843881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.843895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.843913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.843943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 00:34:28.857 [2024-07-25 01:20:21.853706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.853819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.853844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.853858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.853871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.853900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 00:34:28.857 [2024-07-25 01:20:21.863724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.863831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.863856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.857 [2024-07-25 01:20:21.863870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.857 [2024-07-25 01:20:21.863883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.857 [2024-07-25 01:20:21.863911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.857 qpair failed and we were unable to recover it. 
00:34:28.857 [2024-07-25 01:20:21.873759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.857 [2024-07-25 01:20:21.873877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.857 [2024-07-25 01:20:21.873902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.873917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.873930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.873958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:21.883786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.883947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.883972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.883986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.883999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.884028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:21.893910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.894027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.894053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.894067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.894080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.894108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 
00:34:28.858 [2024-07-25 01:20:21.903870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.903983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.904008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.904022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.904035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.904063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:21.913922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.914041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.914066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.914081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.914094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.914122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:21.924018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.924162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.924187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.924201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.924215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.924252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 
00:34:28.858 [2024-07-25 01:20:21.933962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.934104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.934129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.934143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.934162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.934190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:21.944008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.944120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.944145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.944159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.944173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.944201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:21.954016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.954138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.954164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.954178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.954191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.954219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 
00:34:28.858 [2024-07-25 01:20:21.964023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.964140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.964165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.964179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.964192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.964220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:21.974044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.974173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.974199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.974214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.974228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.974263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:21.984069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.984183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.984208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.984223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.984236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.984273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 
00:34:28.858 [2024-07-25 01:20:21.994234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:21.994370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:21.994396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:21.994410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.858 [2024-07-25 01:20:21.994424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.858 [2024-07-25 01:20:21.994452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.858 qpair failed and we were unable to recover it. 00:34:28.858 [2024-07-25 01:20:22.004176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.858 [2024-07-25 01:20:22.004309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.858 [2024-07-25 01:20:22.004334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.858 [2024-07-25 01:20:22.004349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.859 [2024-07-25 01:20:22.004363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:28.859 [2024-07-25 01:20:22.004391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.859 qpair failed and we were unable to recover it. 00:34:29.118 [2024-07-25 01:20:22.014198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.014323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.014349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.014364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.014377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.014405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 
00:34:29.118 [2024-07-25 01:20:22.024178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.024299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.024325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.024339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.024358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.024387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 00:34:29.118 [2024-07-25 01:20:22.034233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.034360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.034386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.034400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.034413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.034441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 00:34:29.118 [2024-07-25 01:20:22.044285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.044405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.044430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.044444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.044457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.044485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 
00:34:29.118 [2024-07-25 01:20:22.054271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.054385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.054410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.054425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.054440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.054469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 00:34:29.118 [2024-07-25 01:20:22.064304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.064419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.064444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.064459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.064472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.064499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 00:34:29.118 [2024-07-25 01:20:22.074347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.074491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.074516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.074531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.074544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.074572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 
00:34:29.118 [2024-07-25 01:20:22.084368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.084487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.084513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.084527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.084541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.084569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 00:34:29.118 [2024-07-25 01:20:22.094388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.094500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.094526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.094540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.094554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.094582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 00:34:29.118 [2024-07-25 01:20:22.104416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.104525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.104550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.104564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.104578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.104606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 
00:34:29.118 [2024-07-25 01:20:22.114523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.114641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.114667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.114687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.118 [2024-07-25 01:20:22.114701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.118 [2024-07-25 01:20:22.114729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.118 qpair failed and we were unable to recover it. 00:34:29.118 [2024-07-25 01:20:22.124570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.118 [2024-07-25 01:20:22.124681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.118 [2024-07-25 01:20:22.124707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.118 [2024-07-25 01:20:22.124721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.124735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.124763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.134517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.134627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.134653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.134668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.134682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.134711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 
00:34:29.119 [2024-07-25 01:20:22.144562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.144690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.144716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.144731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.144748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.144777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.154569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.154728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.154754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.154768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.154782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.154810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.164626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.164763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.164789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.164804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.164821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.164851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 
00:34:29.119 [2024-07-25 01:20:22.174621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.174730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.174755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.174770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.174784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.174812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.184705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.184840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.184866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.184880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.184893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.184921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.194707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.194826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.194853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.194867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.194880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.194908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 
00:34:29.119 [2024-07-25 01:20:22.204763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.204885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.204910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.204934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.204948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.204977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.214811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.214925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.214950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.214965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.214978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.215006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.224805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.224917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.224943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.224957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.224971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.224999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 
00:34:29.119 [2024-07-25 01:20:22.234846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.234961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.234986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.235000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.235013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.235041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.244848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.244969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.244994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.245009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.245023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.119 [2024-07-25 01:20:22.245051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.119 qpair failed and we were unable to recover it. 00:34:29.119 [2024-07-25 01:20:22.254865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.119 [2024-07-25 01:20:22.254982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.119 [2024-07-25 01:20:22.255008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.119 [2024-07-25 01:20:22.255023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.119 [2024-07-25 01:20:22.255036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.120 [2024-07-25 01:20:22.255067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.120 qpair failed and we were unable to recover it. 
00:34:29.120 [2024-07-25 01:20:22.264906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.120 [2024-07-25 01:20:22.265017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.120 [2024-07-25 01:20:22.265043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.120 [2024-07-25 01:20:22.265058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.120 [2024-07-25 01:20:22.265071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.120 [2024-07-25 01:20:22.265099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.120 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.275020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.275142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.275167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.275182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.275195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.275222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.284983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.285118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.285143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.285157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.285171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.285199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 
00:34:29.379 [2024-07-25 01:20:22.294975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.295089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.295119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.295135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.295148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.295176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.305031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.305143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.305169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.305183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.305196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.305226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.315059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.315173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.315198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.315212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.315226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.315262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 
00:34:29.379 [2024-07-25 01:20:22.325165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.325292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.325318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.325332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.325345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.325373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.335122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.335251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.335276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.335290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.335304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.335332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.345132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.345252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.345279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.345293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.345307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.345337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 
00:34:29.379 [2024-07-25 01:20:22.355202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.355338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.355364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.355379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.355392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.355421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.365288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.365401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.365427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.365442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.365455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.365484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.375250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.375363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.375389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.375403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.375417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.375444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 
00:34:29.379 [2024-07-25 01:20:22.385226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.385362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.385394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.385409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.379 [2024-07-25 01:20:22.385423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.379 [2024-07-25 01:20:22.385451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.379 qpair failed and we were unable to recover it. 00:34:29.379 [2024-07-25 01:20:22.395283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.379 [2024-07-25 01:20:22.395407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.379 [2024-07-25 01:20:22.395432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.379 [2024-07-25 01:20:22.395447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.395460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.395488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.405335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.405470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.405496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.405510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.405524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.405552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 
00:34:29.380 [2024-07-25 01:20:22.415372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.415484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.415509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.415523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.415537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.415564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.425462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.425586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.425612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.425626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.425640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.425673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.435491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.435609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.435634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.435648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.435662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.435689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 
00:34:29.380 [2024-07-25 01:20:22.445417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.445536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.445561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.445576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.445589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.445617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.455452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.455579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.455604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.455619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.455632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.455659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.465573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.465692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.465717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.465731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.465745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.465772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 
00:34:29.380 [2024-07-25 01:20:22.475569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.475700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.475731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.475746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.475759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.475786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.485515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.485640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.485666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.485681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.485694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.485722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.495566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.495687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.495713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.495727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.495740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.495768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 
00:34:29.380 [2024-07-25 01:20:22.505668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.505806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.505831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.505845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.505858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.505886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.515623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.515760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.515786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.515800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.515813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.515847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.380 qpair failed and we were unable to recover it. 00:34:29.380 [2024-07-25 01:20:22.525675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.380 [2024-07-25 01:20:22.525805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.380 [2024-07-25 01:20:22.525830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.380 [2024-07-25 01:20:22.525845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.380 [2024-07-25 01:20:22.525858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f9f840 00:34:29.380 [2024-07-25 01:20:22.525886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.381 qpair failed and we were unable to recover it. 
[the same ctrlr.c / nvme_fabric.c / nvme_tcp.c CONNECT-failure block recurs at roughly 10 ms intervals from 01:20:22.535709 through 01:20:22.826540, always against tqpair=0x1f9f840 on qpair id 2; the entries differ only in their timestamps]
00:34:29.900 [2024-07-25 01:20:22.836606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.900 [2024-07-25 01:20:22.836731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.900 [2024-07-25 01:20:22.836763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.900 [2024-07-25 01:20:22.836779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.900 [2024-07-25 01:20:22.836792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fafd4000b90 00:34:29.900 [2024-07-25 01:20:22.836824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:29.900 qpair failed and we were unable to recover it. 00:34:29.900 [2024-07-25 01:20:22.846603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.900 [2024-07-25 01:20:22.846718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.900 [2024-07-25 01:20:22.846745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.900 [2024-07-25 01:20:22.846761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.900 [2024-07-25 01:20:22.846774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fafd4000b90 00:34:29.900 [2024-07-25 01:20:22.846805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:29.900 qpair failed and we were unable to recover it. 00:34:29.900 [2024-07-25 01:20:22.856644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.900 [2024-07-25 01:20:22.856761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.900 [2024-07-25 01:20:22.856793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.900 [2024-07-25 01:20:22.856809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.900 [2024-07-25 01:20:22.856823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fafc4000b90 00:34:29.900 [2024-07-25 01:20:22.856854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.900 qpair failed and we were unable to recover it. 
00:34:29.900 [2024-07-25 01:20:22.866611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.900 [2024-07-25 01:20:22.866725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.900 [2024-07-25 01:20:22.866753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.900 [2024-07-25 01:20:22.866768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.900 [2024-07-25 01:20:22.866782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fafc4000b90 00:34:29.900 [2024-07-25 01:20:22.866814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:29.900 qpair failed and we were unable to recover it. 00:34:29.900 [2024-07-25 01:20:22.876642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.900 [2024-07-25 01:20:22.876787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.900 [2024-07-25 01:20:22.876819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.900 [2024-07-25 01:20:22.876855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.900 [2024-07-25 01:20:22.876882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fafcc000b90 00:34:29.900 [2024-07-25 01:20:22.876931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.900 qpair failed and we were unable to recover it. 00:34:29.900 [2024-07-25 01:20:22.886676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.900 [2024-07-25 01:20:22.886812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.900 [2024-07-25 01:20:22.886841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.900 [2024-07-25 01:20:22.886865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.900 [2024-07-25 01:20:22.886890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fafcc000b90 00:34:29.900 [2024-07-25 01:20:22.886937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.900 qpair failed and we were unable to recover it. 00:34:29.900 [2024-07-25 01:20:22.887051] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:29.900 A controller has encountered a failure and is being reset. 00:34:29.900 Controller properly reset. 
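Dozens of near-identical failure blocks like the ones above are easier to reason about once tallied. A minimal triage sketch, assuming this console output has been saved to build.log (a hypothetical filename); it relies only on the fixed substrings emitted by nvme_tcp.c and nvme_qpair.c:

#!/usr/bin/env bash
# Count CONNECT failures per transport qpair pointer and per qpair id.
log=build.log   # assumed: a saved copy of this console output
grep -o 'Failed to connect tqpair=0x[0-9a-f]*' "$log" | sort | uniq -c | sort -rn
grep -o 'on qpair id [0-9]*' "$log" | sort | uniq -c | sort -rn

On this run the tallies would pin the bulk of the failures to tqpair=0x1f9f840 (qpair id 2), with a late handful on the 0x7faf*000b90 qpairs (ids 1, 3 and 4) just before the keep-alive failure forces the controller reset.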
00:34:29.900 Initializing NVMe Controllers 00:34:29.900 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:29.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:29.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:29.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:29.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:29.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:29.901 Initialization complete. Launching workers. 00:34:29.901 Starting thread on core 1 00:34:29.901 Starting thread on core 2 00:34:29.901 Starting thread on core 3 00:34:29.901 Starting thread on core 0 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:29.901 00:34:29.901 real 0m10.892s 00:34:29.901 user 0m18.073s 00:34:29.901 sys 0m5.091s 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:29.901 ************************************ 00:34:29.901 END TEST nvmf_target_disconnect_tc2 00:34:29.901 ************************************ 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:29.901 rmmod nvme_tcp 00:34:29.901 rmmod nvme_fabrics 00:34:29.901 rmmod nvme_keyring 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3928120 ']' 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3928120 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3928120 ']' 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3928120 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:29.901 01:20:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3928120 00:34:29.901 01:20:23 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:29.901 01:20:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:29.901 01:20:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3928120' 00:34:29.901 killing process with pid 3928120 00:34:29.901 01:20:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3928120 00:34:29.901 01:20:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3928120 00:34:30.159 01:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:30.159 01:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:30.159 01:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:30.159 01:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:30.159 01:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:30.159 01:20:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.159 01:20:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:30.159 01:20:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.725 01:20:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:32.725 00:34:32.725 real 0m15.655s 00:34:32.725 user 0m44.606s 00:34:32.725 sys 0m7.075s 00:34:32.725 01:20:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:32.725 01:20:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:32.725 ************************************ 00:34:32.725 END TEST nvmf_target_disconnect 00:34:32.725 ************************************ 00:34:32.725 01:20:25 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:32.725 01:20:25 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.725 01:20:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.725 01:20:25 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:32.725 00:34:32.725 real 27m1.974s 00:34:32.725 user 74m7.749s 00:34:32.725 sys 6m21.767s 00:34:32.725 01:20:25 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:32.725 01:20:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.725 ************************************ 00:34:32.725 END TEST nvmf_tcp 00:34:32.725 ************************************ 00:34:32.725 01:20:25 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:32.725 01:20:25 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:32.725 01:20:25 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:32.725 01:20:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:32.725 01:20:25 -- common/autotest_common.sh@10 -- # set +x 00:34:32.725 ************************************ 00:34:32.725 START TEST spdkcli_nvmf_tcp 00:34:32.725 ************************************ 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:32.725 * Looking for test storage... 
00:34:32.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.725 01:20:25 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3929317 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3929317 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3929317 ']' 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.726 [2024-07-25 01:20:25.535028] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:34:32.726 [2024-07-25 01:20:25.535122] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929317 ] 00:34:32.726 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.726 [2024-07-25 01:20:25.592606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:32.726 [2024-07-25 01:20:25.681564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.726 [2024-07-25 01:20:25.681569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.726 01:20:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:32.726 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:32.726 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:32.726 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:32.726 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:32.726 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:32.726 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:32.726 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:32.726 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:32.726 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:32.726 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:32.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:32.726 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:32.726 ' 00:34:35.254 [2024-07-25 01:20:28.347422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.626 [2024-07-25 01:20:29.583753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:39.152 [2024-07-25 01:20:31.870838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:41.048 [2024-07-25 01:20:33.849105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:42.455 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:42.455 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:42.455 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:42.455 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:42.455 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:42.455 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:42.455 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:42.455 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:42.455 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.455 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.455 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:42.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:42.455 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:42.455 01:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:42.455 01:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.455 01:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.455 01:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:42.455 01:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:42.455 01:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.455 01:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:42.455 01:20:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.021 01:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:43.021 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:43.021 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:43.021 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:43.021 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:43.021 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:43.021 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:43.021 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:43.021 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:43.021 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:43.021 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:43.021 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:43.021 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:43.021 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:43.021 ' 00:34:48.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:48.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:48.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:48.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:48.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:48.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:48.281 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:48.281 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:48.281 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:48.281 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:48.281 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:48.281 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:48.281 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:48.281 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3929317 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3929317 ']' 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3929317 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3929317 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3929317' 00:34:48.281 killing process with pid 3929317 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3929317 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3929317 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3929317 ']' 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3929317 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3929317 ']' 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3929317 00:34:48.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3929317) - No such process 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3929317 is not found' 00:34:48.281 Process with pid 3929317 is not found 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:48.281 00:34:48.281 real 0m15.966s 00:34:48.281 user 0m33.773s 00:34:48.281 sys 0m0.772s 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:48.281 01:20:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.281 ************************************ 00:34:48.281 END TEST spdkcli_nvmf_tcp 00:34:48.281 ************************************ 00:34:48.281 01:20:41 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:48.281 01:20:41 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:48.281 01:20:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:48.281 01:20:41 -- common/autotest_common.sh@10 -- # set +x 00:34:48.539 ************************************ 00:34:48.539 START TEST nvmf_identify_passthru 00:34:48.539 ************************************ 00:34:48.539 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:48.539 * Looking for test storage... 00:34:48.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.539 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.539 01:20:41 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.539 01:20:41 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.539 01:20:41 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.539 01:20:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.539 01:20:41 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.539 01:20:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.539 01:20:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:48.539 01:20:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:48.539 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:48.539 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.539 01:20:41 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.539 01:20:41 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.540 01:20:41 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.540 01:20:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.540 01:20:41 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.540 01:20:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.540 01:20:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:48.540 01:20:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.540 01:20:41 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.540 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:48.540 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:48.540 01:20:41 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:48.540 01:20:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
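For reference, the gather_supported_nvmf_pci_devs scan that follows classifies NICs purely by PCI vendor:device ID, filling the e810/x722/mlx arrays declared above. A minimal standalone sketch of the same classification, assuming lspci from pciutils is installed; only the 8086:1592/8086:159b Intel E810 IDs are taken from this trace, the rest of the scan logic is illustrative:

#!/usr/bin/env bash
# Classify network PCI functions by vendor:device ID, as the nvmf/common.sh
# helper does. 8086:1592 and 8086:159b are the Intel E810 entries seen in the
# trace below; x722 and mlx IDs would be added the same way.
declare -a e810=()
while read -r addr vd; do
  case "$vd" in
    8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 family
  esac
done < <(lspci -Dn | awk '$2 ~ /^02/ {print $1, $3}')  # PCI class 02xx = network
(( ${#e810[@]} )) && printf 'Found %s (E810)\n' "${e810[@]}"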
00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:50.437 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:50.438 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:50.438 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:50.438 01:20:43 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:50.438 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:50.438 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:50.438 01:20:43 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.438 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:50.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:34:50.695 00:34:50.695 --- 10.0.0.2 ping statistics --- 00:34:50.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.695 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:50.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:34:50.695 00:34:50.695 --- 10.0.0.1 ping statistics --- 00:34:50.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.695 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:50.695 01:20:43 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:50.695 01:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.695 01:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:50.695 01:20:43 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:50.695 01:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:50.695 01:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:50.695 01:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:50.695 01:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:50.695 01:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:50.695 EAL: No free 2048 kB hugepages reported on node 1 00:34:54.876 
01:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:54.876 01:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:54.876 01:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:54.876 01:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:54.876 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.060 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:59.060 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.060 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.060 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3933828 00:34:59.060 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:59.060 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:59.060 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3933828 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3933828 ']' 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:59.060 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.060 [2024-07-25 01:20:52.146383] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:34:59.061 [2024-07-25 01:20:52.146463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.061 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.319 [2024-07-25 01:20:52.222442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:59.319 [2024-07-25 01:20:52.314217] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.319 [2024-07-25 01:20:52.314301] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
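Because nvmf_tgt was launched with --wait-for-rpc, subsystem initialization is deferred until the JSON-RPC calls traced below. Condensed into plain commands, the sequence the test drives is roughly the following; rpc_cmd in the trace wraps scripts/rpc.py (default socket /var/tmp/spdk.sock), and every flag here is copied from the trace itself:

# Start the target inside the test namespace, deferring init until RPC time.
ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# (the harness waits for the RPC socket via waitforlisten before continuing)

# Pre-init settings must land before framework_start_init completes startup.
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr  # enable identify passthru
./scripts/rpc.py framework_start_init                       # finish deferred init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB IO unit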
00:34:59.319 [2024-07-25 01:20:52.314325] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.319 [2024-07-25 01:20:52.314336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.319 [2024-07-25 01:20:52.314346] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.319 [2024-07-25 01:20:52.314395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.319 [2024-07-25 01:20:52.314425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:59.319 [2024-07-25 01:20:52.314484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:59.319 [2024-07-25 01:20:52.314487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.319 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:59.319 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:59.319 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:59.319 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.319 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.319 INFO: Log level set to 20 00:34:59.319 INFO: Requests: 00:34:59.319 { 00:34:59.319 "jsonrpc": "2.0", 00:34:59.319 "method": "nvmf_set_config", 00:34:59.319 "id": 1, 00:34:59.319 "params": { 00:34:59.319 "admin_cmd_passthru": { 00:34:59.319 "identify_ctrlr": true 00:34:59.319 } 00:34:59.319 } 00:34:59.319 } 00:34:59.319 00:34:59.319 INFO: response: 00:34:59.319 { 00:34:59.319 "jsonrpc": "2.0", 00:34:59.319 "id": 1, 00:34:59.319 "result": true 00:34:59.319 } 00:34:59.319 00:34:59.319 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.319 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:59.319 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.319 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.319 INFO: Setting log level to 20 00:34:59.319 INFO: Setting log level to 20 00:34:59.319 INFO: Log level set to 20 00:34:59.319 INFO: Log level set to 20 00:34:59.319 INFO: Requests: 00:34:59.319 { 00:34:59.319 "jsonrpc": "2.0", 00:34:59.319 "method": "framework_start_init", 00:34:59.319 "id": 1 00:34:59.319 } 00:34:59.319 00:34:59.319 INFO: Requests: 00:34:59.319 { 00:34:59.319 "jsonrpc": "2.0", 00:34:59.319 "method": "framework_start_init", 00:34:59.319 "id": 1 00:34:59.319 } 00:34:59.319 00:34:59.578 [2024-07-25 01:20:52.492580] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:59.578 INFO: response: 00:34:59.578 { 00:34:59.578 "jsonrpc": "2.0", 00:34:59.578 "id": 1, 00:34:59.578 "result": true 00:34:59.578 } 00:34:59.578 00:34:59.578 INFO: response: 00:34:59.578 { 00:34:59.578 "jsonrpc": "2.0", 00:34:59.578 "id": 1, 00:34:59.578 "result": true 00:34:59.578 } 00:34:59.578 00:34:59.578 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.578 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:59.578 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.578 01:20:52 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:59.578 INFO: Setting log level to 40 00:34:59.578 INFO: Setting log level to 40 00:34:59.578 INFO: Setting log level to 40 00:34:59.578 [2024-07-25 01:20:52.502676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.578 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.578 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:59.578 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.578 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.578 01:20:52 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:59.578 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.578 01:20:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.854 Nvme0n1 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.854 [2024-07-25 01:20:55.391751] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.854 [ 00:35:02.854 { 00:35:02.854 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:02.854 "subtype": "Discovery", 00:35:02.854 "listen_addresses": [], 00:35:02.854 "allow_any_host": true, 00:35:02.854 "hosts": [] 00:35:02.854 }, 00:35:02.854 { 00:35:02.854 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.854 "subtype": "NVMe", 00:35:02.854 "listen_addresses": [ 00:35:02.854 { 00:35:02.854 "trtype": "TCP", 00:35:02.854 "adrfam": "IPv4", 00:35:02.854 "traddr": "10.0.0.2", 00:35:02.854 "trsvcid": "4420" 00:35:02.854 } 00:35:02.854 ], 00:35:02.854 "allow_any_host": true, 00:35:02.854 "hosts": [], 00:35:02.854 "serial_number": 
"SPDK00000000000001", 00:35:02.854 "model_number": "SPDK bdev Controller", 00:35:02.854 "max_namespaces": 1, 00:35:02.854 "min_cntlid": 1, 00:35:02.854 "max_cntlid": 65519, 00:35:02.854 "namespaces": [ 00:35:02.854 { 00:35:02.854 "nsid": 1, 00:35:02.854 "bdev_name": "Nvme0n1", 00:35:02.854 "name": "Nvme0n1", 00:35:02.854 "nguid": "83A22928B17C410D857AF6FB5EFAF9B8", 00:35:02.854 "uuid": "83a22928-b17c-410d-857a-f6fb5efaf9b8" 00:35:02.854 } 00:35:02.854 ] 00:35:02.854 } 00:35:02.854 ] 00:35:02.854 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:02.854 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:02.854 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:02.854 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.855 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:02.855 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:02.855 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:02.855 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.855 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:02.855 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:02.855 rmmod nvme_tcp 00:35:02.855 rmmod nvme_fabrics 00:35:02.855 rmmod nvme_keyring 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:02.855 01:20:55 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3933828 ']' 00:35:02.855 01:20:55 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3933828 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3933828 ']' 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3933828 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3933828 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3933828' 00:35:02.855 killing process with pid 3933828 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3933828 00:35:02.855 01:20:55 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3933828 00:35:04.750 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:04.750 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:04.750 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:04.750 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:04.750 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:04.750 01:20:57 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.750 01:20:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.750 01:20:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.686 01:20:59 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:06.686 00:35:06.686 real 0m18.132s 00:35:06.686 user 0m27.193s 00:35:06.686 sys 0m2.388s 00:35:06.686 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:06.686 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:06.686 ************************************ 00:35:06.686 END TEST nvmf_identify_passthru 00:35:06.686 ************************************ 00:35:06.686 01:20:59 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:06.686 01:20:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:06.686 01:20:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:06.686 01:20:59 -- common/autotest_common.sh@10 -- # set +x 00:35:06.686 ************************************ 00:35:06.686 START TEST nvmf_dif 00:35:06.686 ************************************ 00:35:06.686 01:20:59 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:06.686 * Looking for test storage... 
00:35:06.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:06.686 01:20:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.686 01:20:59 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.686 01:20:59 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.686 01:20:59 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.686 01:20:59 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.686 01:20:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.687 01:20:59 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.687 01:20:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.687 01:20:59 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:06.687 01:20:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:06.687 01:20:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:06.687 01:20:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:06.687 01:20:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:06.687 01:20:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:06.687 01:20:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.687 01:20:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.687 01:20:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:06.687 01:20:59 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:06.687 01:20:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:08.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:08.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:08.591 01:21:01 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
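Once a PCI function matches, the helper resolves it to its kernel interface through sysfs, which is what the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in this trace does, keeping only interfaces whose operstate is up. A one-off equivalent you could run by hand, with the PCI address taken from this run:

pci=0000:0a:00.0                      # first E810 port reported in the trace
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
  [ -e "$dev" ] || continue           # skip if the function has no net device
  name=${dev##*/}                     # strip the sysfs path, keep the ifname
  state=$(cat "$dev/operstate")       # the helper's [[ up == up ]] check reads this
  echo "Found net device under $pci: $name ($state)"
done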
00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:08.592 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:08.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:08.592 01:21:01 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:08.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:08.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:35:08.592 00:35:08.592 --- 10.0.0.2 ping statistics --- 00:35:08.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.592 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:08.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:08.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:35:08.592 00:35:08.592 --- 10.0.0.1 ping statistics --- 00:35:08.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.592 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:08.592 01:21:01 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:10.011 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:10.011 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:10.011 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:10.011 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:10.011 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:10.011 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:10.011 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:10.011 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:10.011 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:10.011 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:10.011 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:10.011 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:10.011 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:10.011 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:10.011 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:10.011 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:10.011 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:10.011 01:21:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:10.011 01:21:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:10.011 01:21:03 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:10.011 01:21:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3937202 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:10.011 01:21:03 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3937202 00:35:10.011 01:21:03 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3937202 ']' 00:35:10.011 01:21:03 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.011 01:21:03 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:10.011 01:21:03 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.011 01:21:03 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:10.011 01:21:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.011 [2024-07-25 01:21:03.101393] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:35:10.011 [2024-07-25 01:21:03.101472] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.011 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.269 [2024-07-25 01:21:03.169723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.269 [2024-07-25 01:21:03.261895] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.269 [2024-07-25 01:21:03.261952] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:10.269 [2024-07-25 01:21:03.261968] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.269 [2024-07-25 01:21:03.261990] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.269 [2024-07-25 01:21:03.262001] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
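nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the app answers on its JSON-RPC socket. A rough equivalent of that launch-and-wait pattern (the poll loop is an illustration, not the harness's own waitforlisten; SPDK_DIR names this workspace's checkout):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# Poll the UNIX-domain RPC socket; rpc_get_methods is a cheap call that
# starts succeeding as soon as the target is listening on /var/tmp/spdk.sock.
for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done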
00:35:10.269 [2024-07-25 01:21:03.262031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:10.269 01:21:03 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.269 01:21:03 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:10.269 01:21:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:10.269 01:21:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.269 [2024-07-25 01:21:03.412913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.269 01:21:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:10.269 01:21:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.526 ************************************ 00:35:10.526 START TEST fio_dif_1_default 00:35:10.526 ************************************ 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:10.526 bdev_null0 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:10.526 [2024-07-25 01:21:03.473270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.526 { 00:35:10.526 "params": { 00:35:10.526 "name": "Nvme$subsystem", 00:35:10.526 "trtype": "$TEST_TRANSPORT", 00:35:10.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.526 "adrfam": "ipv4", 00:35:10.526 "trsvcid": "$NVMF_PORT", 00:35:10.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.526 "hdgst": ${hdgst:-false}, 00:35:10.526 "ddgst": ${ddgst:-false} 00:35:10.526 }, 00:35:10.526 "method": "bdev_nvme_attach_controller" 00:35:10.526 } 00:35:10.526 EOF 00:35:10.526 )") 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:10.526 "params": { 00:35:10.526 "name": "Nvme0", 00:35:10.526 "trtype": "tcp", 00:35:10.526 "traddr": "10.0.0.2", 00:35:10.526 "adrfam": "ipv4", 00:35:10.526 "trsvcid": "4420", 00:35:10.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.526 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.526 "hdgst": false, 00:35:10.526 "ddgst": false 00:35:10.526 }, 00:35:10.526 "method": "bdev_nvme_attach_controller" 00:35:10.526 }' 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.526 01:21:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.783 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:10.783 fio-3.35 00:35:10.783 Starting 1 thread 00:35:10.783 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.972 00:35:22.972 filename0: (groupid=0, jobs=1): err= 0: pid=3937428: Thu Jul 25 01:21:14 2024 00:35:22.972 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:35:22.972 slat (nsec): min=6138, max=65787, avg=9105.80, stdev=4441.83 00:35:22.972 clat (usec): min=40836, max=48414, avg=41009.98, stdev=483.08 00:35:22.972 lat (usec): min=40843, max=48430, avg=41019.08, stdev=483.10 00:35:22.972 clat percentiles (usec): 00:35:22.972 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:22.972 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:22.972 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:22.972 | 99.00th=[41681], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:35:22.972 | 99.99th=[48497] 00:35:22.972 bw ( KiB/s): min= 384, max= 416, per=99.52%, avg=388.80, stdev=11.72, samples=20 00:35:22.972 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:22.972 
lat (msec) : 50=100.00% 00:35:22.972 cpu : usr=90.26%, sys=9.46%, ctx=15, majf=0, minf=323 00:35:22.972 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.972 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.972 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:22.972 00:35:22.972 Run status group 0 (all jobs): 00:35:22.972 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.972 01:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.972 00:35:22.973 real 0m11.253s 00:35:22.973 user 0m10.328s 00:35:22.973 sys 0m1.253s 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 ************************************ 00:35:22.973 END TEST fio_dif_1_default 00:35:22.973 ************************************ 00:35:22.973 01:21:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:22.973 01:21:14 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:22.973 01:21:14 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 ************************************ 00:35:22.973 START TEST fio_dif_1_multi_subsystems 00:35:22.973 ************************************ 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:22.973 01:21:14 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 bdev_null0 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 [2024-07-25 01:21:14.779095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 bdev_null1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.973 { 00:35:22.973 "params": { 00:35:22.973 "name": "Nvme$subsystem", 00:35:22.973 "trtype": "$TEST_TRANSPORT", 00:35:22.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.973 "adrfam": "ipv4", 00:35:22.973 "trsvcid": "$NVMF_PORT", 00:35:22.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.973 "hdgst": ${hdgst:-false}, 00:35:22.973 "ddgst": ${ddgst:-false} 00:35:22.973 }, 00:35:22.973 "method": "bdev_nvme_attach_controller" 00:35:22.973 } 00:35:22.973 EOF 00:35:22.973 )") 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:22.973 01:21:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.973 { 00:35:22.973 "params": { 00:35:22.973 "name": "Nvme$subsystem", 00:35:22.973 "trtype": "$TEST_TRANSPORT", 00:35:22.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.973 "adrfam": "ipv4", 00:35:22.973 "trsvcid": "$NVMF_PORT", 00:35:22.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.973 "hdgst": ${hdgst:-false}, 00:35:22.973 "ddgst": ${ddgst:-false} 00:35:22.973 }, 00:35:22.973 "method": "bdev_nvme_attach_controller" 00:35:22.973 } 00:35:22.973 EOF 00:35:22.973 )") 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
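gen_nvmf_target_json builds one bdev_nvme_attach_controller stanza per subsystem, comma-joins them, and validates the result with jq; the printf that follows shows the two assembled stanzas for this test. A rough standalone equivalent, with the outer subsystems/bdev wrapper an assumption about the config shape the fio plugin consumes (the trace itself only shows the joined stanzas):

config=()
for sub in 0 1; do
    config+=("$(printf '{
  "params": {
    "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode%s",
    "hostnqn": "nqn.2016-06.io.spdk:host%s",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}' "$sub" "$sub" "$sub")")
done
# comma-join the stanzas, wrap them, and pretty-print/validate through jq
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$joined" | jq .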
00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:22.973 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:22.973 "params": { 00:35:22.973 "name": "Nvme0", 00:35:22.973 "trtype": "tcp", 00:35:22.973 "traddr": "10.0.0.2", 00:35:22.973 "adrfam": "ipv4", 00:35:22.973 "trsvcid": "4420", 00:35:22.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:22.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:22.973 "hdgst": false, 00:35:22.973 "ddgst": false 00:35:22.973 }, 00:35:22.973 "method": "bdev_nvme_attach_controller" 00:35:22.973 },{ 00:35:22.973 "params": { 00:35:22.973 "name": "Nvme1", 00:35:22.973 "trtype": "tcp", 00:35:22.973 "traddr": "10.0.0.2", 00:35:22.973 "adrfam": "ipv4", 00:35:22.973 "trsvcid": "4420", 00:35:22.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:22.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:22.974 "hdgst": false, 00:35:22.974 "ddgst": false 00:35:22.974 }, 00:35:22.974 "method": "bdev_nvme_attach_controller" 00:35:22.974 }' 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:22.974 01:21:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.974 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:22.974 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:22.974 fio-3.35 00:35:22.974 Starting 2 threads 00:35:22.974 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.940 00:35:32.941 filename0: (groupid=0, jobs=1): err= 0: pid=3939340: Thu Jul 25 01:21:25 2024 00:35:32.941 read: IOPS=143, BW=575KiB/s (589kB/s)(5760KiB/10022msec) 00:35:32.941 slat (nsec): min=6790, max=33690, avg=9755.73, stdev=4618.39 00:35:32.941 clat (usec): min=685, max=43363, avg=27806.49, stdev=19007.73 00:35:32.941 lat (usec): min=693, max=43383, avg=27816.25, stdev=19008.27 00:35:32.941 clat percentiles (usec): 00:35:32.941 | 1.00th=[ 717], 5.00th=[ 734], 10.00th=[ 742], 20.00th=[ 766], 00:35:32.941 | 30.00th=[ 840], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:32.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:32.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:32.941 | 99.99th=[43254] 
00:35:32.941 bw ( KiB/s): min= 384, max= 768, per=43.01%, avg=574.40, stdev=184.99, samples=20 00:35:32.941 iops : min= 96, max= 192, avg=143.60, stdev=46.25, samples=20 00:35:32.941 lat (usec) : 750=14.65%, 1000=18.40% 00:35:32.941 lat (msec) : 50=66.94% 00:35:32.941 cpu : usr=94.73%, sys=4.98%, ctx=15, majf=0, minf=106 00:35:32.941 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.941 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.941 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:32.941 filename1: (groupid=0, jobs=1): err= 0: pid=3939341: Thu Jul 25 01:21:25 2024 00:35:32.941 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:35:32.941 slat (nsec): min=6836, max=64217, avg=9814.95, stdev=4757.47 00:35:32.941 clat (usec): min=668, max=43329, avg=20977.62, stdev=20213.45 00:35:32.941 lat (usec): min=676, max=43357, avg=20987.43, stdev=20214.03 00:35:32.941 clat percentiles (usec): 00:35:32.941 | 1.00th=[ 709], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 758], 00:35:32.941 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 1106], 60.00th=[41157], 00:35:32.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:32.941 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:35:32.941 | 99.99th=[43254] 00:35:32.941 bw ( KiB/s): min= 704, max= 768, per=57.02%, avg=761.26, stdev=17.13, samples=19 00:35:32.941 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 00:35:32.941 lat (usec) : 750=16.86%, 1000=32.72% 00:35:32.941 lat (msec) : 2=0.42%, 50=50.00% 00:35:32.941 cpu : usr=94.78%, sys=4.92%, ctx=18, majf=0, minf=177 00:35:32.941 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.941 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.941 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:32.941 00:35:32.941 Run status group 0 (all jobs): 00:35:32.941 READ: bw=1335KiB/s (1367kB/s), 575KiB/s-762KiB/s (589kB/s-780kB/s), io=13.1MiB (13.7MB), run=10001-10022msec 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:33.200 01:21:26 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.200 00:35:33.200 real 0m11.435s 00:35:33.200 user 0m20.380s 00:35:33.200 sys 0m1.285s 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 ************************************ 00:35:33.200 END TEST fio_dif_1_multi_subsystems 00:35:33.200 ************************************ 00:35:33.200 01:21:26 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:33.200 01:21:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:33.200 01:21:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 ************************************ 00:35:33.200 START TEST fio_dif_rand_params 00:35:33.200 ************************************ 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 
-- # create_subsystem 0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 bdev_null0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.200 [2024-07-25 01:21:26.259627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:33.200 { 00:35:33.200 "params": { 00:35:33.200 "name": "Nvme$subsystem", 00:35:33.200 "trtype": "$TEST_TRANSPORT", 00:35:33.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.200 "adrfam": "ipv4", 00:35:33.200 "trsvcid": "$NVMF_PORT", 00:35:33.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.200 "hdgst": ${hdgst:-false}, 00:35:33.200 "ddgst": ${ddgst:-false} 00:35:33.200 }, 00:35:33.200 "method": "bdev_nvme_attach_controller" 00:35:33.200 } 00:35:33.200 EOF 00:35:33.200 )") 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:33.200 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
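fio_bdev prepends any detected sanitizer runtime to LD_PRELOAD along with the spdk_bdev engine, then hands fio the JSON config and the generated job file over anonymous fds (/dev/fd/62 and /dev/fd/61 above). Stripped of the fd plumbing, the invocation reduces to something like the following; nvmf.json stands in for the generated config, and the job parameters are the ones set at dif.sh@103 for this test (bs=128k, numjobs=3, iodepth=3, runtime=5):

PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

cat > dif.fio <<'JOB'
[global]
# thread=1 is required by the SPDK fio plugin
thread=1

[filename0]
# bdev exposed by the attached controller: "Nvme0" plus namespace 1
filename=Nvme0n1
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=5
JOB

LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=./nvmf.json dif.fio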
00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:33.201 "params": { 00:35:33.201 "name": "Nvme0", 00:35:33.201 "trtype": "tcp", 00:35:33.201 "traddr": "10.0.0.2", 00:35:33.201 "adrfam": "ipv4", 00:35:33.201 "trsvcid": "4420", 00:35:33.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:33.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:33.201 "hdgst": false, 00:35:33.201 "ddgst": false 00:35:33.201 }, 00:35:33.201 "method": "bdev_nvme_attach_controller" 00:35:33.201 }' 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:33.201 01:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.459 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:33.459 ... 
00:35:33.459 fio-3.35 00:35:33.459 Starting 3 threads 00:35:33.459 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.015 00:35:40.015 filename0: (groupid=0, jobs=1): err= 0: pid=3940736: Thu Jul 25 01:21:32 2024 00:35:40.015 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(122MiB/5002msec) 00:35:40.015 slat (nsec): min=7521, max=49864, avg=13690.94, stdev=4570.05 00:35:40.016 clat (usec): min=5163, max=57029, avg=15307.52, stdev=13951.48 00:35:40.016 lat (usec): min=5174, max=57050, avg=15321.21, stdev=13951.51 00:35:40.016 clat percentiles (usec): 00:35:40.016 | 1.00th=[ 5538], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 8291], 00:35:40.016 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[11863], 00:35:40.016 | 70.00th=[12518], 80.00th=[13173], 90.00th=[50070], 95.00th=[52167], 00:35:40.016 | 99.00th=[54264], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:35:40.016 | 99.99th=[56886] 00:35:40.016 bw ( KiB/s): min=20736, max=33024, per=31.99%, avg=24519.11, stdev=3748.84, samples=9 00:35:40.016 iops : min= 162, max= 258, avg=191.56, stdev=29.29, samples=9 00:35:40.016 lat (msec) : 10=44.33%, 20=43.11%, 50=2.45%, 100=10.11% 00:35:40.016 cpu : usr=91.74%, sys=7.70%, ctx=9, majf=0, minf=76 00:35:40.016 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.016 issued rwts: total=979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:40.016 filename0: (groupid=0, jobs=1): err= 0: pid=3940737: Thu Jul 25 01:21:32 2024 00:35:40.016 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(119MiB/5015msec) 00:35:40.016 slat (nsec): min=6474, max=50570, avg=13763.77, stdev=4730.57 00:35:40.016 clat (usec): min=5224, max=56712, avg=15796.91, stdev=14140.75 00:35:40.016 lat (usec): min=5235, max=56731, avg=15810.67, stdev=14141.16 00:35:40.016 clat percentiles (usec): 00:35:40.016 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 8455], 00:35:40.016 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[12256], 00:35:40.016 | 70.00th=[13173], 80.00th=[14353], 90.00th=[50594], 95.00th=[53216], 00:35:40.016 | 99.00th=[55837], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:35:40.016 | 99.99th=[56886] 00:35:40.016 bw ( KiB/s): min=19712, max=28928, per=31.67%, avg=24272.90, stdev=3180.54, samples=10 00:35:40.016 iops : min= 154, max= 226, avg=189.60, stdev=24.89, samples=10 00:35:40.016 lat (msec) : 10=40.69%, 20=46.69%, 50=1.68%, 100=10.94% 00:35:40.016 cpu : usr=91.76%, sys=7.66%, ctx=14, majf=0, minf=126 00:35:40.016 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.016 issued rwts: total=951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:40.016 filename0: (groupid=0, jobs=1): err= 0: pid=3940738: Thu Jul 25 01:21:32 2024 00:35:40.016 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5023msec) 00:35:40.016 slat (nsec): min=6576, max=58835, avg=17741.77, stdev=7147.08 00:35:40.016 clat (usec): min=5159, max=57314, avg=13955.43, stdev=11489.23 00:35:40.016 lat (usec): min=5173, max=57326, avg=13973.17, stdev=11489.07 00:35:40.016 clat percentiles (usec): 
00:35:40.016 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 8455], 00:35:40.016 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11469], 00:35:40.016 | 70.00th=[12780], 80.00th=[14353], 90.00th=[17695], 95.00th=[49546], 00:35:40.016 | 99.00th=[52167], 99.50th=[53740], 99.90th=[55837], 99.95th=[57410], 00:35:40.016 | 99.99th=[57410] 00:35:40.016 bw ( KiB/s): min=19968, max=31232, per=35.91%, avg=27525.60, stdev=3400.12, samples=10 00:35:40.016 iops : min= 156, max= 244, avg=215.00, stdev=26.55, samples=10 00:35:40.016 lat (msec) : 10=46.29%, 20=45.08%, 50=4.17%, 100=4.45% 00:35:40.016 cpu : usr=92.27%, sys=7.17%, ctx=23, majf=0, minf=133 00:35:40.016 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.016 issued rwts: total=1078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:40.016 00:35:40.016 Run status group 0 (all jobs): 00:35:40.016 READ: bw=74.9MiB/s (78.5MB/s), 23.7MiB/s-26.8MiB/s (24.9MB/s-28.1MB/s), io=376MiB (394MB), run=5002-5023msec 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
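With NULL_DIF=2 the next create_subsystems pass builds three DIF type 2 namespaces. Per subsystem, the rpc_cmd calls traced below amount to the following sequence against the target's socket (the rpc wrapper is shorthand for scripts/rpc.py, an illustration rather than the harness's rpc_cmd):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 2
rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420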
00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 bdev_null0 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 [2024-07-25 01:21:32.369890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 bdev_null1 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.016 bdev_null2 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:40.016 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # config=() 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.017 { 00:35:40.017 "params": { 00:35:40.017 "name": "Nvme$subsystem", 00:35:40.017 "trtype": "$TEST_TRANSPORT", 00:35:40.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.017 "adrfam": "ipv4", 00:35:40.017 "trsvcid": "$NVMF_PORT", 00:35:40.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.017 "hdgst": ${hdgst:-false}, 00:35:40.017 "ddgst": ${ddgst:-false} 00:35:40.017 }, 00:35:40.017 "method": "bdev_nvme_attach_controller" 00:35:40.017 } 00:35:40.017 EOF 00:35:40.017 )") 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.017 { 00:35:40.017 "params": { 00:35:40.017 "name": "Nvme$subsystem", 00:35:40.017 "trtype": "$TEST_TRANSPORT", 00:35:40.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.017 "adrfam": "ipv4", 00:35:40.017 "trsvcid": "$NVMF_PORT", 00:35:40.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.017 "hdgst": ${hdgst:-false}, 00:35:40.017 "ddgst": ${ddgst:-false} 00:35:40.017 }, 00:35:40.017 "method": "bdev_nvme_attach_controller" 00:35:40.017 } 00:35:40.017 EOF 00:35:40.017 )") 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.017 { 00:35:40.017 "params": { 00:35:40.017 "name": "Nvme$subsystem", 00:35:40.017 "trtype": "$TEST_TRANSPORT", 00:35:40.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.017 "adrfam": "ipv4", 00:35:40.017 "trsvcid": "$NVMF_PORT", 00:35:40.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.017 "hdgst": ${hdgst:-false}, 00:35:40.017 "ddgst": ${ddgst:-false} 00:35:40.017 }, 00:35:40.017 "method": "bdev_nvme_attach_controller" 00:35:40.017 } 00:35:40.017 EOF 00:35:40.017 )") 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:40.017 "params": { 00:35:40.017 "name": "Nvme0", 00:35:40.017 "trtype": "tcp", 00:35:40.017 "traddr": "10.0.0.2", 00:35:40.017 "adrfam": "ipv4", 00:35:40.017 "trsvcid": "4420", 00:35:40.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.017 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.017 "hdgst": false, 00:35:40.017 "ddgst": false 00:35:40.017 }, 00:35:40.017 "method": "bdev_nvme_attach_controller" 00:35:40.017 },{ 00:35:40.017 "params": { 00:35:40.017 "name": "Nvme1", 00:35:40.017 "trtype": "tcp", 00:35:40.017 "traddr": "10.0.0.2", 00:35:40.017 "adrfam": "ipv4", 00:35:40.017 "trsvcid": "4420", 00:35:40.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.017 "hdgst": false, 00:35:40.017 "ddgst": false 00:35:40.017 }, 00:35:40.017 "method": "bdev_nvme_attach_controller" 00:35:40.017 },{ 00:35:40.017 "params": { 00:35:40.017 "name": "Nvme2", 00:35:40.017 "trtype": "tcp", 00:35:40.017 "traddr": "10.0.0.2", 00:35:40.017 "adrfam": "ipv4", 00:35:40.017 "trsvcid": "4420", 00:35:40.017 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:40.017 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:40.017 "hdgst": false, 00:35:40.017 "ddgst": false 00:35:40.017 }, 00:35:40.017 "method": "bdev_nvme_attach_controller" 00:35:40.017 }' 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1341 -- # asan_lib= 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:40.017 01:21:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.017 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:40.017 ... 00:35:40.017 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:40.017 ... 00:35:40.017 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:40.017 ... 00:35:40.017 fio-3.35 00:35:40.017 Starting 24 threads 00:35:40.017 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.214 00:35:52.214 filename0: (groupid=0, jobs=1): err= 0: pid=3941591: Thu Jul 25 01:21:43 2024 00:35:52.214 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:35:52.214 slat (usec): min=8, max=105, avg=22.97, stdev=14.24 00:35:52.214 clat (usec): min=26639, max=51414, avg=33706.38, stdev=1277.57 00:35:52.214 lat (usec): min=26655, max=51440, avg=33729.35, stdev=1277.00 00:35:52.214 clat percentiles (usec): 00:35:52.214 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:35:52.214 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:52.214 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:52.214 | 99.00th=[36963], 99.50th=[37487], 99.90th=[51119], 99.95th=[51119], 00:35:52.214 | 99.99th=[51643] 00:35:52.214 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.214 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.214 lat (msec) : 50=99.66%, 100=0.34% 00:35:52.214 cpu : usr=98.08%, sys=1.51%, ctx=13, majf=0, minf=9 00:35:52.214 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.214 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.214 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.214 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.214 filename0: (groupid=0, jobs=1): err= 0: pid=3941592: Thu Jul 25 01:21:43 2024 00:35:52.214 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10004msec) 00:35:52.214 slat (usec): min=11, max=120, avg=77.95, stdev=13.02 00:35:52.215 clat (usec): min=11704, max=37176, avg=32984.79, stdev=1900.43 00:35:52.215 lat (usec): min=11752, max=37217, avg=33062.74, stdev=1903.69 00:35:52.215 clat percentiles (usec): 00:35:52.215 | 1.00th=[23462], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:52.215 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:52.215 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:52.215 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:35:52.215 | 99.99th=[36963] 00:35:52.215 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1899.79, stdev=47.95, samples=19 00:35:52.215 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:35:52.215 lat (msec) : 20=0.67%, 50=99.33% 00:35:52.215 cpu : usr=98.19%, 
sys=1.36%, ctx=15, majf=0, minf=9 00:35:52.215 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:52.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.215 filename0: (groupid=0, jobs=1): err= 0: pid=3941593: Thu Jul 25 01:21:43 2024 00:35:52.215 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10004msec) 00:35:52.215 slat (nsec): min=9960, max=65567, avg=31816.06, stdev=8174.84 00:35:52.215 clat (usec): min=16193, max=70754, avg=33622.38, stdev=2465.50 00:35:52.215 lat (usec): min=16225, max=70787, avg=33654.20, stdev=2465.86 00:35:52.215 clat percentiles (usec): 00:35:52.215 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.215 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.215 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.215 | 99.00th=[36963], 99.50th=[36963], 99.90th=[70779], 99.95th=[70779], 00:35:52.215 | 99.99th=[70779] 00:35:52.215 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1879.74, stdev=74.07, samples=19 00:35:52.215 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.215 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:52.215 cpu : usr=98.35%, sys=1.26%, ctx=15, majf=0, minf=9 00:35:52.215 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:52.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.215 filename0: (groupid=0, jobs=1): err= 0: pid=3941594: Thu Jul 25 01:21:43 2024 00:35:52.215 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10003msec) 00:35:52.215 slat (nsec): min=8920, max=86507, avg=32253.56, stdev=9410.87 00:35:52.215 clat (usec): min=11177, max=44279, avg=33366.61, stdev=1929.58 00:35:52.215 lat (usec): min=11202, max=44317, avg=33398.87, stdev=1929.60 00:35:52.215 clat percentiles (usec): 00:35:52.215 | 1.00th=[22676], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.215 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:52.215 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.215 | 99.00th=[36439], 99.50th=[36439], 99.90th=[37487], 99.95th=[44303], 00:35:52.215 | 99.99th=[44303] 00:35:52.215 bw ( KiB/s): min= 1792, max= 1968, per=4.19%, avg=1902.32, stdev=50.28, samples=19 00:35:52.215 iops : min= 448, max= 492, avg=475.58, stdev=12.57, samples=19 00:35:52.215 lat (msec) : 20=0.65%, 50=99.35% 00:35:52.215 cpu : usr=98.07%, sys=1.38%, ctx=132, majf=0, minf=9 00:35:52.215 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:52.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 issued rwts: total=4758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.215 filename0: (groupid=0, jobs=1): err= 0: pid=3941595: Thu Jul 25 01:21:43 2024 00:35:52.215 read: IOPS=471, BW=1887KiB/s 
(1932kB/s)(18.4MiB/10007msec) 00:35:52.215 slat (nsec): min=10979, max=83593, avg=36948.16, stdev=10866.16 00:35:52.215 clat (usec): min=26235, max=51715, avg=33590.75, stdev=1347.07 00:35:52.215 lat (usec): min=26275, max=51735, avg=33627.70, stdev=1346.64 00:35:52.215 clat percentiles (usec): 00:35:52.215 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.215 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.215 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.215 | 99.00th=[36963], 99.50th=[40109], 99.90th=[51643], 99.95th=[51643], 00:35:52.215 | 99.99th=[51643] 00:35:52.215 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1888.00, stdev=81.75, samples=20 00:35:52.215 iops : min= 416, max= 512, avg=472.00, stdev=20.44, samples=20 00:35:52.215 lat (msec) : 50=99.66%, 100=0.34% 00:35:52.215 cpu : usr=93.49%, sys=3.78%, ctx=316, majf=0, minf=9 00:35:52.215 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:52.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.215 filename0: (groupid=0, jobs=1): err= 0: pid=3941596: Thu Jul 25 01:21:43 2024 00:35:52.215 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:35:52.215 slat (nsec): min=8872, max=88052, avg=33027.07, stdev=9827.32 00:35:52.215 clat (usec): min=26261, max=51497, avg=33634.92, stdev=1281.57 00:35:52.215 lat (usec): min=26298, max=51525, avg=33667.95, stdev=1280.57 00:35:52.215 clat percentiles (usec): 00:35:52.215 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.215 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:52.215 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.215 | 99.00th=[36963], 99.50th=[37487], 99.90th=[51643], 99.95th=[51643], 00:35:52.215 | 99.99th=[51643] 00:35:52.215 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.215 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.215 lat (msec) : 50=99.66%, 100=0.34% 00:35:52.215 cpu : usr=98.10%, sys=1.50%, ctx=18, majf=0, minf=9 00:35:52.215 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.215 filename0: (groupid=0, jobs=1): err= 0: pid=3941597: Thu Jul 25 01:21:43 2024 00:35:52.215 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:35:52.215 slat (nsec): min=7504, max=92884, avg=27871.77, stdev=15732.64 00:35:52.215 clat (usec): min=14953, max=66238, avg=33598.08, stdev=2125.15 00:35:52.215 lat (usec): min=14978, max=66258, avg=33625.96, stdev=2124.44 00:35:52.215 clat percentiles (usec): 00:35:52.215 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.215 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:52.215 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:52.215 | 99.00th=[36439], 99.50th=[47449], 
99.90th=[53740], 99.95th=[53740], 00:35:52.215 | 99.99th=[66323] 00:35:52.215 bw ( KiB/s): min= 1664, max= 1992, per=4.14%, avg=1879.58, stdev=78.31, samples=19 00:35:52.215 iops : min= 416, max= 498, avg=469.89, stdev=19.58, samples=19 00:35:52.215 lat (msec) : 20=0.68%, 50=98.99%, 100=0.34% 00:35:52.215 cpu : usr=98.23%, sys=1.36%, ctx=14, majf=0, minf=9 00:35:52.215 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.215 filename0: (groupid=0, jobs=1): err= 0: pid=3941598: Thu Jul 25 01:21:43 2024 00:35:52.215 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:35:52.215 slat (usec): min=6, max=114, avg=39.49, stdev=18.32 00:35:52.215 clat (usec): min=15055, max=66156, avg=33466.10, stdev=1964.88 00:35:52.215 lat (usec): min=15069, max=66173, avg=33505.59, stdev=1963.01 00:35:52.215 clat percentiles (usec): 00:35:52.215 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:52.215 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.215 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:52.215 | 99.00th=[36439], 99.50th=[36963], 99.90th=[53740], 99.95th=[53740], 00:35:52.215 | 99.99th=[66323] 00:35:52.215 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.215 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.215 lat (msec) : 20=0.68%, 50=98.99%, 100=0.34% 00:35:52.215 cpu : usr=98.24%, sys=1.34%, ctx=14, majf=0, minf=9 00:35:52.215 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.215 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.215 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.215 filename1: (groupid=0, jobs=1): err= 0: pid=3941599: Thu Jul 25 01:21:43 2024 00:35:52.215 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:35:52.215 slat (usec): min=9, max=104, avg=37.44, stdev=13.44 00:35:52.215 clat (usec): min=26291, max=51595, avg=33595.83, stdev=1268.20 00:35:52.215 lat (usec): min=26331, max=51636, avg=33633.27, stdev=1267.40 00:35:52.215 clat percentiles (usec): 00:35:52.215 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.215 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.215 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.215 | 99.00th=[36963], 99.50th=[37487], 99.90th=[51643], 99.95th=[51643], 00:35:52.215 | 99.99th=[51643] 00:35:52.215 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.216 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.216 lat (msec) : 50=99.66%, 100=0.34% 00:35:52.216 cpu : usr=95.31%, sys=2.63%, ctx=168, majf=0, minf=9 00:35:52.216 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:52.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:52.216 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.216 filename1: (groupid=0, jobs=1): err= 0: pid=3941600: Thu Jul 25 01:21:43 2024 00:35:52.216 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:35:52.216 slat (nsec): min=14125, max=80423, avg=34331.27, stdev=9318.67 00:35:52.216 clat (usec): min=11366, max=37227, avg=33374.21, stdev=1947.29 00:35:52.216 lat (usec): min=11416, max=37252, avg=33408.54, stdev=1946.25 00:35:52.216 clat percentiles (usec): 00:35:52.216 | 1.00th=[23462], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.216 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.216 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.216 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:35:52.216 | 99.99th=[37487] 00:35:52.216 bw ( KiB/s): min= 1792, max= 1923, per=4.18%, avg=1899.95, stdev=48.03, samples=19 00:35:52.216 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:35:52.216 lat (msec) : 20=0.67%, 50=99.33% 00:35:52.216 cpu : usr=96.82%, sys=2.29%, ctx=81, majf=0, minf=9 00:35:52.216 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.216 filename1: (groupid=0, jobs=1): err= 0: pid=3941601: Thu Jul 25 01:21:43 2024 00:35:52.216 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec) 00:35:52.216 slat (nsec): min=8773, max=81804, avg=34188.37, stdev=9042.29 00:35:52.216 clat (usec): min=10090, max=63420, avg=33501.76, stdev=2481.37 00:35:52.216 lat (usec): min=10106, max=63456, avg=33535.95, stdev=2481.35 00:35:52.216 clat percentiles (usec): 00:35:52.216 | 1.00th=[26870], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.216 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.216 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.216 | 99.00th=[36963], 99.50th=[37487], 99.90th=[63177], 99.95th=[63177], 00:35:52.216 | 99.99th=[63177] 00:35:52.216 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.216 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.216 lat (msec) : 20=0.68%, 50=98.99%, 100=0.34% 00:35:52.216 cpu : usr=91.56%, sys=4.49%, ctx=251, majf=0, minf=9 00:35:52.216 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.216 filename1: (groupid=0, jobs=1): err= 0: pid=3941602: Thu Jul 25 01:21:43 2024 00:35:52.216 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10014msec) 00:35:52.216 slat (usec): min=8, max=114, avg=39.88, stdev=20.14 00:35:52.216 clat (usec): min=15090, max=55770, avg=33476.12, stdev=1996.15 00:35:52.216 lat (usec): min=15114, max=55808, avg=33516.00, stdev=1994.37 00:35:52.216 clat percentiles (usec): 00:35:52.216 | 
1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:52.216 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.216 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:52.216 | 99.00th=[36439], 99.50th=[36963], 99.90th=[55837], 99.95th=[55837], 00:35:52.216 | 99.99th=[55837] 00:35:52.216 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.216 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.216 lat (msec) : 20=0.68%, 50=98.99%, 100=0.34% 00:35:52.216 cpu : usr=96.06%, sys=2.38%, ctx=48, majf=0, minf=9 00:35:52.216 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.216 filename1: (groupid=0, jobs=1): err= 0: pid=3941603: Thu Jul 25 01:21:43 2024 00:35:52.216 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10009msec) 00:35:52.216 slat (nsec): min=8034, max=78619, avg=16483.06, stdev=11250.33 00:35:52.216 clat (usec): min=9911, max=63331, avg=33480.87, stdev=3780.62 00:35:52.216 lat (usec): min=9964, max=63377, avg=33497.36, stdev=3779.84 00:35:52.216 clat percentiles (usec): 00:35:52.216 | 1.00th=[26084], 5.00th=[26870], 10.00th=[28443], 20.00th=[33162], 00:35:52.216 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:52.216 | 70.00th=[33817], 80.00th=[34341], 90.00th=[37487], 95.00th=[40109], 00:35:52.216 | 99.00th=[41681], 99.50th=[47973], 99.90th=[63177], 99.95th=[63177], 00:35:52.216 | 99.99th=[63177] 00:35:52.216 bw ( KiB/s): min= 1632, max= 1936, per=4.18%, avg=1898.11, stdev=68.76, samples=19 00:35:52.216 iops : min= 408, max= 484, avg=474.53, stdev=17.19, samples=19 00:35:52.216 lat (msec) : 10=0.04%, 20=0.50%, 50=99.12%, 100=0.34% 00:35:52.216 cpu : usr=98.00%, sys=1.60%, ctx=14, majf=0, minf=9 00:35:52.216 IO depths : 1=0.2%, 2=0.4%, 4=2.3%, 8=80.2%, 16=16.9%, 32=0.0%, >=64=0.0% 00:35:52.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 complete : 0=0.0%, 4=89.2%, 8=9.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 issued rwts: total=4772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.216 filename1: (groupid=0, jobs=1): err= 0: pid=3941604: Thu Jul 25 01:21:43 2024 00:35:52.216 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10018msec) 00:35:52.216 slat (usec): min=8, max=210, avg=76.63, stdev=11.59 00:35:52.216 clat (usec): min=19321, max=75625, avg=33280.57, stdev=2827.84 00:35:52.216 lat (usec): min=19332, max=75673, avg=33357.20, stdev=2826.19 00:35:52.216 clat percentiles (usec): 00:35:52.216 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:52.216 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:52.216 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:52.216 | 99.00th=[36439], 99.50th=[46400], 99.90th=[74974], 99.95th=[76022], 00:35:52.216 | 99.99th=[76022] 00:35:52.216 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=73.12, samples=20 00:35:52.216 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:52.216 lat (msec) : 20=0.34%, 50=99.28%, 100=0.38% 
00:35:52.216 cpu : usr=98.21%, sys=1.34%, ctx=13, majf=0, minf=9 00:35:52.216 IO depths : 1=6.1%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:52.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.216 filename1: (groupid=0, jobs=1): err= 0: pid=3941605: Thu Jul 25 01:21:43 2024 00:35:52.216 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:35:52.216 slat (nsec): min=9532, max=70124, avg=32986.97, stdev=9059.30 00:35:52.216 clat (usec): min=11622, max=37167, avg=33397.73, stdev=1902.50 00:35:52.216 lat (usec): min=11634, max=37201, avg=33430.71, stdev=1903.14 00:35:52.216 clat percentiles (usec): 00:35:52.216 | 1.00th=[23987], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.216 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:52.216 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.216 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:35:52.216 | 99.99th=[36963] 00:35:52.216 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1899.79, stdev=47.95, samples=19 00:35:52.216 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:35:52.216 lat (msec) : 20=0.67%, 50=99.33% 00:35:52.216 cpu : usr=94.96%, sys=2.86%, ctx=159, majf=0, minf=9 00:35:52.216 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.216 filename1: (groupid=0, jobs=1): err= 0: pid=3941606: Thu Jul 25 01:21:43 2024 00:35:52.216 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:35:52.216 slat (nsec): min=14228, max=95681, avg=34130.92, stdev=8440.30 00:35:52.216 clat (usec): min=26308, max=51522, avg=33613.32, stdev=1268.59 00:35:52.216 lat (usec): min=26331, max=51549, avg=33647.45, stdev=1267.79 00:35:52.216 clat percentiles (usec): 00:35:52.216 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.216 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.216 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.216 | 99.00th=[36963], 99.50th=[37487], 99.90th=[51643], 99.95th=[51643], 00:35:52.216 | 99.99th=[51643] 00:35:52.216 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.216 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.216 lat (msec) : 50=99.66%, 100=0.34% 00:35:52.216 cpu : usr=95.78%, sys=2.50%, ctx=43, majf=0, minf=9 00:35:52.216 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:52.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.216 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.216 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.216 filename2: (groupid=0, jobs=1): err= 0: pid=3941607: Thu Jul 25 01:21:43 2024 00:35:52.216 read: IOPS=475, BW=1900KiB/s 
(1946kB/s)(18.6MiB/10002msec) 00:35:52.217 slat (usec): min=8, max=100, avg=34.12, stdev=13.95 00:35:52.217 clat (usec): min=12022, max=37170, avg=33392.35, stdev=1895.40 00:35:52.217 lat (usec): min=12071, max=37214, avg=33426.47, stdev=1895.26 00:35:52.217 clat percentiles (usec): 00:35:52.217 | 1.00th=[23200], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.217 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:52.217 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.217 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:35:52.217 | 99.99th=[36963] 00:35:52.217 bw ( KiB/s): min= 1792, max= 1923, per=4.18%, avg=1899.95, stdev=48.03, samples=19 00:35:52.217 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:35:52.217 lat (msec) : 20=0.63%, 50=99.37% 00:35:52.217 cpu : usr=98.27%, sys=1.32%, ctx=25, majf=0, minf=9 00:35:52.217 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.217 filename2: (groupid=0, jobs=1): err= 0: pid=3941608: Thu Jul 25 01:21:43 2024 00:35:52.217 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10014msec) 00:35:52.217 slat (nsec): min=8129, max=83986, avg=30513.78, stdev=13447.95 00:35:52.217 clat (usec): min=13813, max=56164, avg=33177.56, stdev=3000.81 00:35:52.217 lat (usec): min=13824, max=56198, avg=33208.07, stdev=3003.32 00:35:52.217 clat percentiles (usec): 00:35:52.217 | 1.00th=[20841], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:52.217 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.217 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:52.217 | 99.00th=[40633], 99.50th=[50594], 99.90th=[55837], 99.95th=[56361], 00:35:52.217 | 99.99th=[56361] 00:35:52.217 bw ( KiB/s): min= 1664, max= 2176, per=4.19%, avg=1902.32, stdev=110.97, samples=19 00:35:52.217 iops : min= 416, max= 544, avg=475.58, stdev=27.74, samples=19 00:35:52.217 lat (msec) : 20=0.92%, 50=98.54%, 100=0.54% 00:35:52.217 cpu : usr=98.24%, sys=1.37%, ctx=15, majf=0, minf=9 00:35:52.217 IO depths : 1=5.8%, 2=11.7%, 4=23.7%, 8=52.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:52.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.217 filename2: (groupid=0, jobs=1): err= 0: pid=3941609: Thu Jul 25 01:21:43 2024 00:35:52.217 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10004msec) 00:35:52.217 slat (nsec): min=9700, max=96262, avg=35085.00, stdev=13645.57 00:35:52.217 clat (usec): min=16131, max=70839, avg=33575.31, stdev=2480.74 00:35:52.217 lat (usec): min=16146, max=70873, avg=33610.39, stdev=2480.71 00:35:52.217 clat percentiles (usec): 00:35:52.217 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:52.217 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.217 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.217 | 99.00th=[36439], 99.50th=[36963], 
99.90th=[70779], 99.95th=[70779], 00:35:52.217 | 99.99th=[70779] 00:35:52.217 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1879.74, stdev=74.07, samples=19 00:35:52.217 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.217 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:52.217 cpu : usr=94.22%, sys=3.25%, ctx=160, majf=0, minf=9 00:35:52.217 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:52.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.217 filename2: (groupid=0, jobs=1): err= 0: pid=3941610: Thu Jul 25 01:21:43 2024 00:35:52.217 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10004msec) 00:35:52.217 slat (usec): min=8, max=115, avg=40.07, stdev=19.06 00:35:52.217 clat (usec): min=16125, max=70877, avg=33535.25, stdev=2489.12 00:35:52.217 lat (usec): min=16133, max=70916, avg=33575.32, stdev=2488.09 00:35:52.217 clat percentiles (usec): 00:35:52.217 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:52.217 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.217 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.217 | 99.00th=[36439], 99.50th=[36963], 99.90th=[70779], 99.95th=[70779], 00:35:52.217 | 99.99th=[70779] 00:35:52.217 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1879.74, stdev=74.07, samples=19 00:35:52.217 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.217 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:52.217 cpu : usr=96.45%, sys=2.33%, ctx=160, majf=0, minf=9 00:35:52.217 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:52.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.217 filename2: (groupid=0, jobs=1): err= 0: pid=3941611: Thu Jul 25 01:21:43 2024 00:35:52.217 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:35:52.217 slat (usec): min=13, max=110, avg=43.80, stdev=17.46 00:35:52.217 clat (usec): min=26210, max=51805, avg=33529.10, stdev=1302.68 00:35:52.217 lat (usec): min=26246, max=51826, avg=33572.90, stdev=1299.76 00:35:52.217 clat percentiles (usec): 00:35:52.217 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:52.217 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.217 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.217 | 99.00th=[36963], 99.50th=[36963], 99.90th=[51643], 99.95th=[51643], 00:35:52.217 | 99.99th=[51643] 00:35:52.217 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.217 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.217 lat (msec) : 50=99.66%, 100=0.34% 00:35:52.217 cpu : usr=98.06%, sys=1.51%, ctx=23, majf=0, minf=9 00:35:52.217 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:52.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:52.217 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.217 filename2: (groupid=0, jobs=1): err= 0: pid=3941612: Thu Jul 25 01:21:43 2024 00:35:52.217 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec) 00:35:52.217 slat (nsec): min=8395, max=68339, avg=33100.96, stdev=9716.60 00:35:52.217 clat (usec): min=10240, max=62760, avg=33510.93, stdev=2451.62 00:35:52.217 lat (usec): min=10265, max=62797, avg=33544.03, stdev=2451.96 00:35:52.217 clat percentiles (usec): 00:35:52.217 | 1.00th=[26870], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.217 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.217 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.217 | 99.00th=[36963], 99.50th=[37487], 99.90th=[62653], 99.95th=[62653], 00:35:52.217 | 99.99th=[62653] 00:35:52.217 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1879.74, stdev=74.07, samples=19 00:35:52.217 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.217 lat (msec) : 20=0.68%, 50=98.99%, 100=0.34% 00:35:52.217 cpu : usr=98.32%, sys=1.28%, ctx=14, majf=0, minf=9 00:35:52.217 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.217 filename2: (groupid=0, jobs=1): err= 0: pid=3941613: Thu Jul 25 01:21:43 2024 00:35:52.217 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10014msec) 00:35:52.217 slat (nsec): min=8877, max=77936, avg=33360.53, stdev=7499.18 00:35:52.217 clat (usec): min=22209, max=75033, avg=33646.31, stdev=1816.52 00:35:52.217 lat (usec): min=22249, max=75077, avg=33679.67, stdev=1816.56 00:35:52.217 clat percentiles (usec): 00:35:52.217 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.217 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:52.217 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.217 | 99.00th=[36963], 99.50th=[36963], 99.90th=[60031], 99.95th=[60031], 00:35:52.217 | 99.99th=[74974] 00:35:52.217 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=73.12, samples=20 00:35:52.217 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:52.217 lat (msec) : 50=99.66%, 100=0.34% 00:35:52.217 cpu : usr=98.27%, sys=1.32%, ctx=15, majf=0, minf=9 00:35:52.217 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.217 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.217 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.217 filename2: (groupid=0, jobs=1): err= 0: pid=3941614: Thu Jul 25 01:21:43 2024 00:35:52.217 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:35:52.217 slat (nsec): min=9049, max=85568, avg=32548.10, stdev=10476.03 00:35:52.217 clat (usec): min=26265, max=59810, avg=33635.59, stdev=1328.05 00:35:52.217 lat (usec): min=26298, max=59836, avg=33668.14, stdev=1326.88 00:35:52.217 clat percentiles (usec): 
00:35:52.217 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:52.217 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:52.217 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:52.217 | 99.00th=[36963], 99.50th=[37487], 99.90th=[51119], 99.95th=[51643], 00:35:52.217 | 99.99th=[60031] 00:35:52.217 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:35:52.217 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:35:52.218 lat (msec) : 50=99.66%, 100=0.34% 00:35:52.218 cpu : usr=98.22%, sys=1.35%, ctx=13, majf=0, minf=9 00:35:52.218 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:52.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.218 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.218 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:52.218 00:35:52.218 Run status group 0 (all jobs): 00:35:52.218 READ: bw=44.3MiB/s (46.5MB/s), 1885KiB/s-1913KiB/s (1930kB/s-1959kB/s), io=444MiB (466MB), run=10002-10018msec 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 bdev_null0 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
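(A gloss on the bdev_null_create call in the trace above; argument meanings as documented for SPDK's RPC interface, values taken verbatim from this run.)

    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # bdev_null0   : bdev name
    # 64           : total size in MB
    # 512          : logical block size in bytes
    # --md-size 16 : per-block metadata bytes reserved for the protection tuple
    # --dif-type 1 : T10 DIF type 1 (the previous group of jobs used type 2)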
00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 [2024-07-25 01:21:44.133996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 bdev_null1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.218 { 
00:35:52.218 "params": { 00:35:52.218 "name": "Nvme$subsystem", 00:35:52.218 "trtype": "$TEST_TRANSPORT", 00:35:52.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.218 "adrfam": "ipv4", 00:35:52.218 "trsvcid": "$NVMF_PORT", 00:35:52.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.218 "hdgst": ${hdgst:-false}, 00:35:52.218 "ddgst": ${ddgst:-false} 00:35:52.218 }, 00:35:52.218 "method": "bdev_nvme_attach_controller" 00:35:52.218 } 00:35:52.218 EOF 00:35:52.218 )") 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:52.218 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.219 { 00:35:52.219 "params": { 00:35:52.219 "name": "Nvme$subsystem", 00:35:52.219 "trtype": "$TEST_TRANSPORT", 00:35:52.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.219 "adrfam": "ipv4", 00:35:52.219 "trsvcid": "$NVMF_PORT", 00:35:52.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.219 "hdgst": ${hdgst:-false}, 00:35:52.219 "ddgst": ${ddgst:-false} 00:35:52.219 }, 00:35:52.219 "method": "bdev_nvme_attach_controller" 00:35:52.219 } 00:35:52.219 EOF 00:35:52.219 )") 00:35:52.219 01:21:44 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:52.219 "params": { 00:35:52.219 "name": "Nvme0", 00:35:52.219 "trtype": "tcp", 00:35:52.219 "traddr": "10.0.0.2", 00:35:52.219 "adrfam": "ipv4", 00:35:52.219 "trsvcid": "4420", 00:35:52.219 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:52.219 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.219 "hdgst": false, 00:35:52.219 "ddgst": false 00:35:52.219 }, 00:35:52.219 "method": "bdev_nvme_attach_controller" 00:35:52.219 },{ 00:35:52.219 "params": { 00:35:52.219 "name": "Nvme1", 00:35:52.219 "trtype": "tcp", 00:35:52.219 "traddr": "10.0.0.2", 00:35:52.219 "adrfam": "ipv4", 00:35:52.219 "trsvcid": "4420", 00:35:52.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:52.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:52.219 "hdgst": false, 00:35:52.219 "ddgst": false 00:35:52.219 }, 00:35:52.219 "method": "bdev_nvme_attach_controller" 00:35:52.219 }' 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:52.219 01:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.219 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:52.219 ... 00:35:52.219 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:52.219 ... 
00:35:52.219 fio-3.35 00:35:52.219 Starting 4 threads 00:35:52.219 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.519 00:35:57.519 filename0: (groupid=0, jobs=1): err= 0: pid=3942972: Thu Jul 25 01:21:50 2024 00:35:57.519 read: IOPS=1727, BW=13.5MiB/s (14.2MB/s)(67.5MiB/5002msec) 00:35:57.519 slat (nsec): min=4317, max=69879, avg=23039.64, stdev=11558.18 00:35:57.519 clat (usec): min=997, max=8247, avg=4545.65, stdev=553.24 00:35:57.519 lat (usec): min=1010, max=8259, avg=4568.69, stdev=553.07 00:35:57.519 clat percentiles (usec): 00:35:57.519 | 1.00th=[ 2933], 5.00th=[ 3916], 10.00th=[ 4146], 20.00th=[ 4293], 00:35:57.519 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4621], 00:35:57.519 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5211], 00:35:57.519 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 7832], 99.95th=[ 7963], 00:35:57.519 | 99.99th=[ 8225] 00:35:57.519 bw ( KiB/s): min=13056, max=14464, per=24.88%, avg=13816.00, stdev=519.13, samples=10 00:35:57.519 iops : min= 1632, max= 1808, avg=1727.00, stdev=64.89, samples=10 00:35:57.519 lat (usec) : 1000=0.01% 00:35:57.519 lat (msec) : 2=0.38%, 4=5.21%, 10=94.40% 00:35:57.519 cpu : usr=94.02%, sys=5.52%, ctx=13, majf=0, minf=9 00:35:57.519 IO depths : 1=0.1%, 2=17.4%, 4=55.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.519 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.519 issued rwts: total=8643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.519 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:57.519 filename0: (groupid=0, jobs=1): err= 0: pid=3942973: Thu Jul 25 01:21:50 2024 00:35:57.519 read: IOPS=1747, BW=13.7MiB/s (14.3MB/s)(68.3MiB/5001msec) 00:35:57.519 slat (nsec): min=4044, max=68788, avg=23491.10, stdev=9168.69 00:35:57.519 clat (usec): min=790, max=9267, avg=4493.02, stdev=492.56 00:35:57.519 lat (usec): min=811, max=9306, avg=4516.51, stdev=493.11 00:35:57.519 clat percentiles (usec): 00:35:57.519 | 1.00th=[ 3032], 5.00th=[ 3851], 10.00th=[ 4146], 20.00th=[ 4293], 00:35:57.519 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:35:57.519 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4948], 00:35:57.519 | 99.00th=[ 6128], 99.50th=[ 6980], 99.90th=[ 8586], 99.95th=[ 8717], 00:35:57.519 | 99.99th=[ 9241] 00:35:57.519 bw ( KiB/s): min=13408, max=14512, per=25.10%, avg=13937.78, stdev=429.06, samples=9 00:35:57.519 iops : min= 1676, max= 1814, avg=1742.22, stdev=53.63, samples=9 00:35:57.519 lat (usec) : 1000=0.03% 00:35:57.519 lat (msec) : 2=0.34%, 4=6.44%, 10=93.18% 00:35:57.519 cpu : usr=94.64%, sys=4.84%, ctx=14, majf=0, minf=0 00:35:57.519 IO depths : 1=0.2%, 2=19.6%, 4=54.4%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.519 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.519 issued rwts: total=8739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.519 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:57.519 filename1: (groupid=0, jobs=1): err= 0: pid=3942974: Thu Jul 25 01:21:50 2024 00:35:57.519 read: IOPS=1749, BW=13.7MiB/s (14.3MB/s)(68.4MiB/5004msec) 00:35:57.519 slat (nsec): min=4084, max=61318, avg=15135.30, stdev=9176.13 00:35:57.519 clat (usec): min=1193, max=8355, avg=4524.24, stdev=414.32 00:35:57.519 lat (usec): min=1212, max=8368, avg=4539.38, stdev=414.24 00:35:57.519 
clat percentiles (usec): 00:35:57.519 | 1.00th=[ 3228], 5.00th=[ 3949], 10.00th=[ 4178], 20.00th=[ 4293], 00:35:57.519 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4621], 00:35:57.519 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4948], 00:35:57.519 | 99.00th=[ 5669], 99.50th=[ 6587], 99.90th=[ 7898], 99.95th=[ 8029], 00:35:57.519 | 99.99th=[ 8356] 00:35:57.519 bw ( KiB/s): min=13440, max=14592, per=25.20%, avg=13994.60, stdev=490.63, samples=10 00:35:57.519 iops : min= 1680, max= 1824, avg=1749.30, stdev=61.36, samples=10 00:35:57.519 lat (msec) : 2=0.10%, 4=5.70%, 10=94.20% 00:35:57.519 cpu : usr=94.30%, sys=5.24%, ctx=14, majf=0, minf=0 00:35:57.519 IO depths : 1=0.3%, 2=12.8%, 4=60.5%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.519 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.519 issued rwts: total=8752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.519 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:57.519 filename1: (groupid=0, jobs=1): err= 0: pid=3942975: Thu Jul 25 01:21:50 2024 00:35:57.519 read: IOPS=1720, BW=13.4MiB/s (14.1MB/s)(67.2MiB/5002msec) 00:35:57.519 slat (nsec): min=3974, max=69827, avg=22558.58, stdev=11599.93 00:35:57.519 clat (usec): min=904, max=8462, avg=4568.72, stdev=574.07 00:35:57.519 lat (usec): min=916, max=8476, avg=4591.28, stdev=573.78 00:35:57.519 clat percentiles (usec): 00:35:57.520 | 1.00th=[ 3163], 5.00th=[ 3949], 10.00th=[ 4178], 20.00th=[ 4293], 00:35:57.520 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4621], 00:35:57.520 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5342], 00:35:57.520 | 99.00th=[ 7046], 99.50th=[ 7308], 99.90th=[ 7963], 99.95th=[ 8029], 00:35:57.520 | 99.99th=[ 8455] 00:35:57.520 bw ( KiB/s): min=12697, max=14624, per=24.77%, avg=13756.10, stdev=573.86, samples=10 00:35:57.520 iops : min= 1587, max= 1828, avg=1719.50, stdev=71.76, samples=10 00:35:57.520 lat (usec) : 1000=0.03% 00:35:57.520 lat (msec) : 2=0.27%, 4=5.09%, 10=94.61% 00:35:57.520 cpu : usr=94.62%, sys=4.86%, ctx=7, majf=0, minf=9 00:35:57.520 IO depths : 1=0.1%, 2=16.7%, 4=56.5%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.520 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.520 issued rwts: total=8604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.520 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:57.520 00:35:57.520 Run status group 0 (all jobs): 00:35:57.520 READ: bw=54.2MiB/s (56.9MB/s), 13.4MiB/s-13.7MiB/s (14.1MB/s-14.3MB/s), io=271MiB (285MB), run=5001-5004msec 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.520 00:35:57.520 real 0m24.210s 00:35:57.520 user 4m30.362s 00:35:57.520 sys 0m7.863s 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 ************************************ 00:35:57.520 END TEST fio_dif_rand_params 00:35:57.520 ************************************ 00:35:57.520 01:21:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:57.520 01:21:50 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:57.520 01:21:50 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 ************************************ 00:35:57.520 START TEST fio_dif_digest 00:35:57.520 ************************************ 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 bdev_null0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.520 [2024-07-25 01:21:50.526152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:57.520 { 00:35:57.520 "params": { 00:35:57.520 "name": "Nvme$subsystem", 00:35:57.520 "trtype": "$TEST_TRANSPORT", 00:35:57.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.520 "adrfam": "ipv4", 00:35:57.520 "trsvcid": "$NVMF_PORT", 00:35:57.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.520 "hdgst": ${hdgst:-false}, 00:35:57.520 "ddgst": ${ddgst:-false} 00:35:57.520 }, 00:35:57.520 "method": "bdev_nvme_attach_controller" 
00:35:57.520 } 00:35:57.520 EOF 00:35:57.520 )") 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:57.520 01:21:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:57.520 "params": { 00:35:57.521 "name": "Nvme0", 00:35:57.521 "trtype": "tcp", 00:35:57.521 "traddr": "10.0.0.2", 00:35:57.521 "adrfam": "ipv4", 00:35:57.521 "trsvcid": "4420", 00:35:57.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.521 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.521 "hdgst": true, 00:35:57.521 "ddgst": true 00:35:57.521 }, 00:35:57.521 "method": "bdev_nvme_attach_controller" 00:35:57.521 }' 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:57.521 01:21:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.779 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:57.779 ... 
00:35:57.779 fio-3.35 00:35:57.779 Starting 3 threads 00:35:57.779 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.974 00:36:09.974 filename0: (groupid=0, jobs=1): err= 0: pid=3943749: Thu Jul 25 01:22:01 2024 00:36:09.974 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10044msec) 00:36:09.974 slat (nsec): min=4547, max=41656, avg=17105.04, stdev=4223.09 00:36:09.974 clat (usec): min=8587, max=56049, avg=14801.94, stdev=2721.13 00:36:09.974 lat (usec): min=8623, max=56070, avg=14819.05, stdev=2721.09 00:36:09.974 clat percentiles (usec): 00:36:09.974 | 1.00th=[11600], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:36:09.974 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:36:09.974 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:36:09.974 | 99.00th=[17433], 99.50th=[24249], 99.90th=[55313], 99.95th=[55837], 00:36:09.974 | 99.99th=[55837] 00:36:09.974 bw ( KiB/s): min=23552, max=26880, per=32.79%, avg=25948.10, stdev=919.80, samples=20 00:36:09.974 iops : min= 184, max= 210, avg=202.70, stdev= 7.20, samples=20 00:36:09.974 lat (msec) : 10=0.49%, 20=98.97%, 50=0.25%, 100=0.30% 00:36:09.974 cpu : usr=93.63%, sys=5.73%, ctx=175, majf=0, minf=180 00:36:09.974 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.974 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.974 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:09.974 filename0: (groupid=0, jobs=1): err= 0: pid=3943750: Thu Jul 25 01:22:01 2024 00:36:09.974 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(256MiB/10045msec) 00:36:09.974 slat (nsec): min=4984, max=81775, avg=18439.58, stdev=4762.90 00:36:09.974 clat (usec): min=7901, max=56910, avg=14648.79, stdev=2228.71 00:36:09.974 lat (usec): min=7916, max=56920, avg=14667.23, stdev=2228.59 00:36:09.974 clat percentiles (usec): 00:36:09.974 | 1.00th=[11731], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:36:09.974 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14746], 00:36:09.974 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:36:09.974 | 99.00th=[17171], 99.50th=[17433], 99.90th=[55313], 99.95th=[56886], 00:36:09.974 | 99.99th=[56886] 00:36:09.974 bw ( KiB/s): min=24320, max=27648, per=33.14%, avg=26227.20, stdev=682.26, samples=20 00:36:09.974 iops : min= 190, max= 216, avg=204.90, stdev= 5.33, samples=20 00:36:09.974 lat (msec) : 10=0.39%, 20=99.22%, 50=0.20%, 100=0.20% 00:36:09.974 cpu : usr=93.69%, sys=5.83%, ctx=28, majf=0, minf=209 00:36:09.974 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.974 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.974 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:09.974 filename0: (groupid=0, jobs=1): err= 0: pid=3943751: Thu Jul 25 01:22:01 2024 00:36:09.974 read: IOPS=211, BW=26.5MiB/s (27.8MB/s)(266MiB/10045msec) 00:36:09.974 slat (nsec): min=4710, max=46734, avg=20736.97, stdev=5275.50 00:36:09.974 clat (usec): min=8014, max=53832, avg=14107.87, stdev=1674.05 00:36:09.974 lat (usec): min=8046, max=53853, avg=14128.61, stdev=1673.96 00:36:09.974 clat percentiles (usec): 
00:36:09.974 | 1.00th=[ 9896], 5.00th=[12125], 10.00th=[12649], 20.00th=[13173], 00:36:09.974 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:36:09.974 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15533], 95.00th=[15926], 00:36:09.974 | 99.00th=[16712], 99.50th=[17433], 99.90th=[23725], 99.95th=[47449], 00:36:09.974 | 99.99th=[53740] 00:36:09.974 bw ( KiB/s): min=26112, max=28416, per=34.40%, avg=27225.60, stdev=577.08, samples=20 00:36:09.974 iops : min= 204, max= 222, avg=212.70, stdev= 4.51, samples=20 00:36:09.974 lat (msec) : 10=1.03%, 20=98.73%, 50=0.19%, 100=0.05% 00:36:09.974 cpu : usr=91.53%, sys=7.08%, ctx=505, majf=0, minf=203 00:36:09.974 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.974 issued rwts: total=2129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.974 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:09.974 00:36:09.974 Run status group 0 (all jobs): 00:36:09.974 READ: bw=77.3MiB/s (81.0MB/s), 25.3MiB/s-26.5MiB/s (26.5MB/s-27.8MB/s), io=776MiB (814MB), run=10044-10045msec 00:36:09.974 01:22:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:09.974 01:22:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:09.974 01:22:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.974 01:22:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.975 00:36:09.975 real 0m11.161s 00:36:09.975 user 0m29.172s 00:36:09.975 sys 0m2.159s 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:09.975 01:22:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.975 ************************************ 00:36:09.975 END TEST fio_dif_digest 00:36:09.975 ************************************ 00:36:09.975 01:22:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:09.975 01:22:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:09.975 rmmod nvme_tcp 00:36:09.975 rmmod 
nvme_fabrics 00:36:09.975 rmmod nvme_keyring 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3937202 ']' 00:36:09.975 01:22:01 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3937202 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3937202 ']' 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3937202 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3937202 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3937202' 00:36:09.975 killing process with pid 3937202 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3937202 00:36:09.975 01:22:01 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3937202 00:36:09.975 01:22:02 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:09.975 01:22:02 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:09.975 Waiting for block devices as requested 00:36:09.975 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:10.232 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:10.232 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:10.232 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:10.489 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:10.489 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:10.489 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:10.489 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:10.745 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:10.745 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:10.745 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:10.745 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:11.003 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:11.003 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:11.003 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:11.003 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:11.260 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:11.260 01:22:04 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:11.261 01:22:04 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:11.261 01:22:04 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:11.261 01:22:04 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:11.261 01:22:04 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.261 01:22:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:11.261 01:22:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.785 01:22:06 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:13.786 00:36:13.786 real 1m6.753s 00:36:13.786 user 6m27.163s 00:36:13.786 sys 0m19.322s 00:36:13.786 01:22:06 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:13.786 01:22:06 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:36:13.786 ************************************ 00:36:13.786 END TEST nvmf_dif 00:36:13.786 ************************************ 00:36:13.786 01:22:06 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:13.786 01:22:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:13.786 01:22:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:13.786 01:22:06 -- common/autotest_common.sh@10 -- # set +x 00:36:13.786 ************************************ 00:36:13.786 START TEST nvmf_abort_qd_sizes 00:36:13.786 ************************************ 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:13.786 * Looking for test storage... 00:36:13.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.786 01:22:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:13.786 01:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:15.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:15.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:15.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:15.688 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:15.689 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:15.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:36:15.689 00:36:15.689 --- 10.0.0.2 ping statistics --- 00:36:15.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.689 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:15.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:36:15.689 00:36:15.689 --- 10.0.0.1 ping statistics --- 00:36:15.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.689 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:15.689 01:22:08 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.624 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:16.624 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:16.624 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:16.624 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:16.624 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:16.624 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:16.624 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:16.624 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:16.624 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:16.624 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:16.624 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:16.624 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:16.624 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:16.624 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:16.624 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:16.624 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:17.558 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3948527 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3948527 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3948527 ']' 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:17.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:17.816 01:22:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.816 [2024-07-25 01:22:10.858649] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:36:17.816 [2024-07-25 01:22:10.858733] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.816 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.816 [2024-07-25 01:22:10.923866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:18.075 [2024-07-25 01:22:11.016086] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.075 [2024-07-25 01:22:11.016142] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:18.075 [2024-07-25 01:22:11.016171] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:18.075 [2024-07-25 01:22:11.016182] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:18.075 [2024-07-25 01:22:11.016192] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:18.075 [2024-07-25 01:22:11.016280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.075 [2024-07-25 01:22:11.016306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:18.075 [2024-07-25 01:22:11.016362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:18.075 [2024-07-25 01:22:11.016364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:18.075 01:22:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:18.075 ************************************ 00:36:18.075 START TEST spdk_target_abort 00:36:18.075 ************************************ 00:36:18.075 01:22:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:18.075 01:22:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:18.075 01:22:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:18.075 01:22:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.075 01:22:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.355 spdk_targetn1 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.355 [2024-07-25 01:22:14.044373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.355 [2024-07-25 01:22:14.076645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.355 01:22:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.355 EAL: No free 2048 kB hugepages reported on node 1 
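The run above is the first leg of the queue-depth sweep: -q sets the abort example's queue depth (the quantity under test), -w rw -M 50 requests a 50/50 read/write mix, -o 4096 uses 4 KiB I/Os, and -r carries the transport ID of the listener added a few lines earlier. Stripped of the xtrace noise, the loop being executed is simply:

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # each pass reports I/Os completed plus how many aborts were
        # submitted, succeeded ("success"), or came back unsuccessful
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done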
00:36:24.636 Initializing NVMe Controllers 00:36:24.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:24.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:24.636 Initialization complete. Launching workers. 00:36:24.636 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11314, failed: 0 00:36:24.636 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 10089 00:36:24.636 success 746, unsuccess 479, failed 0 00:36:24.636 01:22:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.636 01:22:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.636 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.914 Initializing NVMe Controllers 00:36:27.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:27.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:27.914 Initialization complete. Launching workers. 00:36:27.914 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8579, failed: 0 00:36:27.914 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7337 00:36:27.914 success 312, unsuccess 930, failed 0 00:36:27.914 01:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:27.914 01:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.914 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.192 Initializing NVMe Controllers 00:36:31.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:31.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:31.192 Initialization complete. Launching workers. 
00:36:31.192 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31298, failed: 0 00:36:31.192 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2661, failed to submit 28637 00:36:31.192 success 531, unsuccess 2130, failed 0 00:36:31.192 01:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:31.192 01:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.192 01:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.192 01:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.192 01:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:31.192 01:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.192 01:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3948527 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3948527 ']' 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3948527 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3948527 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3948527' 00:36:32.125 killing process with pid 3948527 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3948527 00:36:32.125 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3948527 00:36:32.383 00:36:32.383 real 0m14.222s 00:36:32.383 user 0m53.896s 00:36:32.383 sys 0m2.652s 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.383 ************************************ 00:36:32.383 END TEST spdk_target_abort 00:36:32.383 ************************************ 00:36:32.383 01:22:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:32.383 01:22:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:32.383 01:22:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:32.383 01:22:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:32.383 ************************************ 00:36:32.383 START TEST kernel_target_abort 00:36:32.383 
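Where spdk_target_abort drove a userspace SPDK target, this test aims the same sweep at a Linux kernel nvmet target assembled through configfs. After picking an unused local block device (/dev/nvme0n1 in this run), the configure_kernel_target trace that follows reduces to roughly the script below; note the xtrace records only the echoed values, so the configfs attribute names shown are the standard nvmet ones, not names taken from the log:

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"      # bring the namespace online

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"   # start listening for this subsystem

Teardown at the end of the test (clean_kernel_target) is the same thing in reverse: remove the port's subsystem link, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.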
************************************ 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:32.383 01:22:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:33.780 Waiting for block devices as requested 00:36:33.780 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:33.780 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:33.780 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:34.038 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:34.038 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:34.038 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:34.038 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:34.038 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:34.296 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:34.296 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:34.296 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:34.296 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:34.554 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:34.554 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:34.554 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:34.554 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:34.811 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:34.811 No valid GPT data, bailing 00:36:34.811 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:34.812 01:22:27 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:34.812 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:35.069 01:22:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:35.069 00:36:35.069 Discovery Log Number of Records 2, Generation counter 2 00:36:35.069 =====Discovery Log Entry 0====== 00:36:35.069 trtype: tcp 00:36:35.069 adrfam: ipv4 00:36:35.069 subtype: current discovery subsystem 00:36:35.069 treq: not specified, sq flow control disable supported 00:36:35.069 portid: 1 00:36:35.069 trsvcid: 4420 00:36:35.069 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:35.069 traddr: 10.0.0.1 00:36:35.069 eflags: none 00:36:35.069 sectype: none 00:36:35.069 =====Discovery Log Entry 1====== 00:36:35.069 trtype: tcp 00:36:35.069 adrfam: ipv4 00:36:35.069 subtype: nvme subsystem 00:36:35.069 treq: not specified, sq flow control disable supported 00:36:35.069 portid: 1 00:36:35.069 trsvcid: 4420 00:36:35.069 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:35.069 traddr: 10.0.0.1 00:36:35.069 eflags: none 00:36:35.069 sectype: none 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.069 01:22:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:35.069 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.070 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:35.070 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.070 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:35.070 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.070 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.070 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:35.070 01:22:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.070 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.346 Initializing NVMe Controllers 00:36:38.346 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:38.346 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:38.346 Initialization complete. Launching workers. 00:36:38.346 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34978, failed: 0 00:36:38.346 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34978, failed to submit 0 00:36:38.346 success 0, unsuccess 34978, failed 0 00:36:38.346 01:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:38.346 01:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.346 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.623 Initializing NVMe Controllers 00:36:41.623 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.623 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:41.623 Initialization complete. Launching workers. 
00:36:41.623 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67406, failed: 0 00:36:41.623 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17018, failed to submit 50388 00:36:41.623 success 0, unsuccess 17018, failed 0 00:36:41.623 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:41.623 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.623 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.902 Initializing NVMe Controllers 00:36:44.902 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:44.902 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:44.902 Initialization complete. Launching workers. 00:36:44.902 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66164, failed: 0 00:36:44.902 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16534, failed to submit 49630 00:36:44.902 success 0, unsuccess 16534, failed 0 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:44.902 01:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:45.468 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:45.468 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:45.468 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:45.468 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:45.726 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:45.726 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:45.726 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:45.726 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:45.726 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:45.726 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:45.726 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:45.726 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:45.726 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:45.726 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:45.726 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:45.726 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:46.662 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:46.662 00:36:46.662 real 0m14.304s 00:36:46.662 user 0m5.419s 00:36:46.662 sys 0m3.453s 00:36:46.662 01:22:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:46.662 01:22:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.662 ************************************ 00:36:46.662 END TEST kernel_target_abort 00:36:46.662 ************************************ 00:36:46.662 01:22:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:46.662 01:22:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:46.662 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:46.662 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:46.662 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:46.662 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:46.662 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:46.662 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:46.662 rmmod nvme_tcp 00:36:46.927 rmmod nvme_fabrics 00:36:46.927 rmmod nvme_keyring 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3948527 ']' 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3948527 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3948527 ']' 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3948527 00:36:46.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3948527) - No such process 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3948527 is not found' 00:36:46.927 Process with pid 3948527 is not found 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:46.927 01:22:39 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:47.882 Waiting for block devices as requested 00:36:47.882 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:47.882 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:48.141 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:48.141 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:48.141 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:48.141 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:48.399 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:48.399 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:48.399 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:48.399 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:48.657 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:48.657 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:48.657 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:48.657 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:48.915 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:48.915 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:48.915 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:49.173 01:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:49.173 01:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:49.173 01:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:49.174 01:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:49.174 01:22:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.174 01:22:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:49.174 01:22:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.072 01:22:44 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:51.072 00:36:51.072 real 0m37.745s 00:36:51.072 user 1m1.333s 00:36:51.072 sys 0m9.337s 00:36:51.072 01:22:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:51.072 01:22:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.072 ************************************ 00:36:51.072 END TEST nvmf_abort_qd_sizes 00:36:51.072 ************************************ 00:36:51.072 01:22:44 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:51.072 01:22:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:51.072 01:22:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:51.072 01:22:44 -- common/autotest_common.sh@10 -- # set +x 00:36:51.072 ************************************ 00:36:51.072 START TEST keyring_file 00:36:51.072 ************************************ 00:36:51.072 01:22:44 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:51.330 * Looking for test storage... 
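The keyring_file suite that begins here exercises TLS key files end to end. Before any I/O, it writes two PSKs to disk in the NVMe/TCP interchange form (roughly NVMeTLSkey-1:<hash>:<base64 of key plus CRC-32>:, produced by the harness's format_interchange_psk helper), restricts the files' permissions, and registers them with bdevperf over its dedicated RPC socket. Condensed, using the temp-file name this run happened to get:

    key0path=$(mktemp)                       # /tmp/tmp.uPEibmr1ff in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"                   # the harness locks key files to 0600

    # once bdevperf is listening on its own RPC socket:
    ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

The repeated keyring_get_keys | jq '.[] | select(.name == "key0")' calls in the trace below are how the test asserts each key's refcnt as controllers attach and detach.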
00:36:51.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:51.330 01:22:44 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:51.330 01:22:44 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.330 01:22:44 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.330 01:22:44 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.330 01:22:44 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.330 01:22:44 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.330 01:22:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.330 01:22:44 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.330 01:22:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:51.330 01:22:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:51.330 01:22:44 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uPEibmr1ff 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:51.331 01:22:44 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uPEibmr1ff 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uPEibmr1ff 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uPEibmr1ff 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SK2iiVjJuW 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:51.331 01:22:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SK2iiVjJuW 00:36:51.331 01:22:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SK2iiVjJuW 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SK2iiVjJuW 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@30 -- # tgtpid=3954273 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:51.331 01:22:44 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3954273 00:36:51.331 01:22:44 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3954273 ']' 00:36:51.331 01:22:44 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.331 01:22:44 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:51.331 01:22:44 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:51.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.331 01:22:44 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:51.331 01:22:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.331 [2024-07-25 01:22:44.436154] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:36:51.331 [2024-07-25 01:22:44.436263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954273 ] 00:36:51.331 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.589 [2024-07-25 01:22:44.496357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.589 [2024-07-25 01:22:44.583590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:51.847 01:22:44 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.847 [2024-07-25 01:22:44.839471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.847 null0 00:36:51.847 [2024-07-25 01:22:44.871545] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:51.847 [2024-07-25 01:22:44.872009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:51.847 [2024-07-25 01:22:44.879559] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.847 01:22:44 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.847 [2024-07-25 01:22:44.887575] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:51.847 request: 00:36:51.847 { 00:36:51.847 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.847 "secure_channel": false, 00:36:51.847 "listen_address": { 00:36:51.847 "trtype": "tcp", 00:36:51.847 "traddr": "127.0.0.1", 00:36:51.847 "trsvcid": "4420" 00:36:51.847 }, 00:36:51.847 "method": "nvmf_subsystem_add_listener", 00:36:51.847 "req_id": 1 00:36:51.847 } 00:36:51.847 Got JSON-RPC error response 00:36:51.847 response: 00:36:51.847 { 00:36:51.847 "code": -32602, 00:36:51.847 "message": "Invalid parameters" 00:36:51.847 } 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:51.847 01:22:44 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:51.847 01:22:44 keyring_file -- keyring/file.sh@46 -- # bperfpid=3954292 00:36:51.847 01:22:44 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:51.847 01:22:44 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3954292 /var/tmp/bperf.sock 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3954292 ']' 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:51.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:51.847 01:22:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.847 [2024-07-25 01:22:44.938192] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:36:51.847 [2024-07-25 01:22:44.938273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954292 ] 00:36:51.847 EAL: No free 2048 kB hugepages reported on node 1 00:36:52.105 [2024-07-25 01:22:45.004444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.106 [2024-07-25 01:22:45.095145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:52.106 01:22:45 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:52.106 01:22:45 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:52.106 01:22:45 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPEibmr1ff 00:36:52.106 01:22:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uPEibmr1ff 00:36:52.364 01:22:45 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SK2iiVjJuW 00:36:52.364 01:22:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SK2iiVjJuW 00:36:52.622 01:22:45 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:52.622 01:22:45 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:52.622 01:22:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.622 01:22:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.622 01:22:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.880 01:22:45 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.uPEibmr1ff == \/\t\m\p\/\t\m\p\.\u\P\E\i\b\m\r\1\f\f ]] 00:36:52.880 01:22:45 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:52.880 01:22:45 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:52.880 01:22:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.880 01:22:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.880 01:22:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.138 01:22:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.SK2iiVjJuW == \/\t\m\p\/\t\m\p\.\S\K\2\i\i\V\j\J\u\W ]] 00:36:53.138 01:22:46 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:53.138 01:22:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.138 01:22:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.138 01:22:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.138 01:22:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.138 01:22:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.396 01:22:46 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:53.396 01:22:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:53.396 01:22:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:53.396 01:22:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.396 01:22:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.396 01:22:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.396 01:22:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.653 01:22:46 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:53.653 01:22:46 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.653 01:22:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.911 [2024-07-25 01:22:46.950735] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:53.911 nvme0n1 00:36:53.911 01:22:47 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:53.911 01:22:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.911 01:22:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.911 01:22:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.911 01:22:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.911 01:22:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.169 01:22:47 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:54.169 01:22:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:54.169 01:22:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:54.169 01:22:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.169 01:22:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.169 
01:22:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.169 01:22:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:54.427 01:22:47 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:54.427 01:22:47 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:54.684 Running I/O for 1 seconds... 00:36:55.617 00:36:55.617 Latency(us) 00:36:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.617 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:55.617 nvme0n1 : 1.02 5128.40 20.03 0.00 0.00 24687.92 4466.16 27379.48 00:36:55.617 =================================================================================================================== 00:36:55.617 Total : 5128.40 20.03 0.00 0.00 24687.92 4466.16 27379.48 00:36:55.617 0 00:36:55.617 01:22:48 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:55.617 01:22:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:55.875 01:22:48 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:55.875 01:22:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:55.875 01:22:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.875 01:22:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.875 01:22:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.875 01:22:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.133 01:22:49 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:56.133 01:22:49 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:56.133 01:22:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:56.133 01:22:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.133 01:22:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.133 01:22:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.133 01:22:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:56.391 01:22:49 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:56.391 01:22:49 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:56.391 01:22:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:56.391 01:22:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:56.391 01:22:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:56.391 01:22:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:56.391 01:22:49 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:56.391 01:22:49 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:56.391 01:22:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:56.391 01:22:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:56.649 [2024-07-25 01:22:49.675375] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:56.649 [2024-07-25 01:22:49.675864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf3730 (107): Transport endpoint is not connected 00:36:56.649 [2024-07-25 01:22:49.676858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf3730 (9): Bad file descriptor 00:36:56.649 [2024-07-25 01:22:49.677856] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:56.649 [2024-07-25 01:22:49.677879] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:56.649 [2024-07-25 01:22:49.677894] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:56.649 request: 00:36:56.649 { 00:36:56.649 "name": "nvme0", 00:36:56.649 "trtype": "tcp", 00:36:56.649 "traddr": "127.0.0.1", 00:36:56.649 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.649 "adrfam": "ipv4", 00:36:56.649 "trsvcid": "4420", 00:36:56.649 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.649 "psk": "key1", 00:36:56.649 "method": "bdev_nvme_attach_controller", 00:36:56.649 "req_id": 1 00:36:56.649 } 00:36:56.649 Got JSON-RPC error response 00:36:56.649 response: 00:36:56.649 { 00:36:56.649 "code": -5, 00:36:56.649 "message": "Input/output error" 00:36:56.649 } 00:36:56.649 01:22:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:56.649 01:22:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:56.649 01:22:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:56.649 01:22:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:56.649 01:22:49 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:56.649 01:22:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:56.649 01:22:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.649 01:22:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.649 01:22:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.649 01:22:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.908 01:22:49 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:56.908 01:22:49 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:56.908 01:22:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:56.908 01:22:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.908 01:22:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.908 01:22:49 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.908 01:22:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:57.166 01:22:50 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:57.166 01:22:50 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:57.166 01:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:57.424 01:22:50 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:57.424 01:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:57.681 01:22:50 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:57.681 01:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.681 01:22:50 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:57.940 01:22:50 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:57.940 01:22:50 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.uPEibmr1ff 00:36:57.940 01:22:50 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPEibmr1ff 00:36:57.940 01:22:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:57.940 01:22:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPEibmr1ff 00:36:57.940 01:22:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:57.940 01:22:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.940 01:22:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:57.940 01:22:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.940 01:22:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPEibmr1ff 00:36:57.940 01:22:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uPEibmr1ff 00:36:58.198 [2024-07-25 01:22:51.157784] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uPEibmr1ff': 0100660 00:36:58.198 [2024-07-25 01:22:51.157822] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:58.198 request: 00:36:58.198 { 00:36:58.198 "name": "key0", 00:36:58.198 "path": "/tmp/tmp.uPEibmr1ff", 00:36:58.198 "method": "keyring_file_add_key", 00:36:58.198 "req_id": 1 00:36:58.198 } 00:36:58.198 Got JSON-RPC error response 00:36:58.198 response: 00:36:58.198 { 00:36:58.198 "code": -1, 00:36:58.198 "message": "Operation not permitted" 00:36:58.198 } 00:36:58.198 01:22:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:58.198 01:22:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:58.198 01:22:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:58.198 01:22:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:58.198 01:22:51 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.uPEibmr1ff 00:36:58.198 01:22:51 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.uPEibmr1ff 00:36:58.198 01:22:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uPEibmr1ff 00:36:58.455 01:22:51 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.uPEibmr1ff 00:36:58.455 01:22:51 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:58.455 01:22:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:58.455 01:22:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:58.455 01:22:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:58.455 01:22:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:58.455 01:22:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.713 01:22:51 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:58.713 01:22:51 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.713 01:22:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:58.713 01:22:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.713 01:22:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:58.713 01:22:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.713 01:22:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:58.713 01:22:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.713 01:22:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.713 01:22:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.971 [2024-07-25 01:22:51.895791] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uPEibmr1ff': No such file or directory 00:36:58.971 [2024-07-25 01:22:51.895824] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:58.971 [2024-07-25 01:22:51.895855] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:58.971 [2024-07-25 01:22:51.895868] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:58.971 [2024-07-25 01:22:51.895881] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:58.971 request: 00:36:58.971 { 00:36:58.971 "name": "nvme0", 00:36:58.971 "trtype": "tcp", 00:36:58.971 "traddr": "127.0.0.1", 00:36:58.971 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.971 "adrfam": "ipv4", 00:36:58.971 "trsvcid": "4420", 00:36:58.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.971 "psk": "key0", 00:36:58.971 "method": "bdev_nvme_attach_controller", 
00:36:58.971 "req_id": 1 00:36:58.971 } 00:36:58.971 Got JSON-RPC error response 00:36:58.971 response: 00:36:58.971 { 00:36:58.971 "code": -19, 00:36:58.971 "message": "No such device" 00:36:58.971 } 00:36:58.971 01:22:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:58.971 01:22:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:58.971 01:22:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:58.971 01:22:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:58.971 01:22:51 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:58.971 01:22:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:59.229 01:22:52 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9ZGZqXWEhX 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:59.229 01:22:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:59.229 01:22:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:59.229 01:22:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:59.229 01:22:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:59.229 01:22:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:59.229 01:22:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9ZGZqXWEhX 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9ZGZqXWEhX 00:36:59.229 01:22:52 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.9ZGZqXWEhX 00:36:59.229 01:22:52 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZGZqXWEhX 00:36:59.229 01:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZGZqXWEhX 00:36:59.486 01:22:52 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:59.487 01:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:59.744 nvme0n1 00:36:59.744 01:22:52 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:59.744 01:22:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:59.744 01:22:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:59.744 01:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.744 01:22:52 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.744 01:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.001 01:22:53 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:00.001 01:22:53 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:00.001 01:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:00.259 01:22:53 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:00.259 01:22:53 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:00.259 01:22:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.259 01:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.259 01:22:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.515 01:22:53 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:00.515 01:22:53 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:00.515 01:22:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:00.515 01:22:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.515 01:22:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.515 01:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.515 01:22:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.772 01:22:53 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:00.772 01:22:53 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:00.772 01:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:01.030 01:22:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:01.030 01:22:54 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:01.030 01:22:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.287 01:22:54 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:01.288 01:22:54 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZGZqXWEhX 00:37:01.288 01:22:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZGZqXWEhX 00:37:01.545 01:22:54 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SK2iiVjJuW 00:37:01.545 01:22:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SK2iiVjJuW 00:37:01.802 01:22:54 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:01.802 01:22:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:02.060 nvme0n1 00:37:02.060 01:22:55 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:02.060 01:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:02.318 01:22:55 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:02.318 "subsystems": [ 00:37:02.318 { 00:37:02.318 "subsystem": "keyring", 00:37:02.318 "config": [ 00:37:02.318 { 00:37:02.318 "method": "keyring_file_add_key", 00:37:02.318 "params": { 00:37:02.318 "name": "key0", 00:37:02.318 "path": "/tmp/tmp.9ZGZqXWEhX" 00:37:02.318 } 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "method": "keyring_file_add_key", 00:37:02.318 "params": { 00:37:02.318 "name": "key1", 00:37:02.318 "path": "/tmp/tmp.SK2iiVjJuW" 00:37:02.318 } 00:37:02.318 } 00:37:02.318 ] 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "subsystem": "iobuf", 00:37:02.318 "config": [ 00:37:02.318 { 00:37:02.318 "method": "iobuf_set_options", 00:37:02.318 "params": { 00:37:02.318 "small_pool_count": 8192, 00:37:02.318 "large_pool_count": 1024, 00:37:02.318 "small_bufsize": 8192, 00:37:02.318 "large_bufsize": 135168 00:37:02.318 } 00:37:02.318 } 00:37:02.318 ] 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "subsystem": "sock", 00:37:02.318 "config": [ 00:37:02.318 { 00:37:02.318 "method": "sock_set_default_impl", 00:37:02.318 "params": { 00:37:02.318 "impl_name": "posix" 00:37:02.318 } 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "method": "sock_impl_set_options", 00:37:02.318 "params": { 00:37:02.318 "impl_name": "ssl", 00:37:02.318 "recv_buf_size": 4096, 00:37:02.318 "send_buf_size": 4096, 00:37:02.318 "enable_recv_pipe": true, 00:37:02.318 "enable_quickack": false, 00:37:02.318 "enable_placement_id": 0, 00:37:02.318 "enable_zerocopy_send_server": true, 00:37:02.318 "enable_zerocopy_send_client": false, 00:37:02.318 "zerocopy_threshold": 0, 00:37:02.318 "tls_version": 0, 00:37:02.318 "enable_ktls": false 00:37:02.318 } 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "method": "sock_impl_set_options", 00:37:02.318 "params": { 00:37:02.318 "impl_name": "posix", 00:37:02.318 "recv_buf_size": 2097152, 00:37:02.318 "send_buf_size": 2097152, 00:37:02.318 "enable_recv_pipe": true, 00:37:02.318 "enable_quickack": false, 00:37:02.318 "enable_placement_id": 0, 00:37:02.318 "enable_zerocopy_send_server": true, 00:37:02.318 "enable_zerocopy_send_client": false, 00:37:02.318 "zerocopy_threshold": 0, 00:37:02.318 "tls_version": 0, 00:37:02.318 "enable_ktls": false 00:37:02.318 } 00:37:02.318 } 00:37:02.318 ] 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "subsystem": "vmd", 00:37:02.318 "config": [] 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "subsystem": "accel", 00:37:02.318 "config": [ 00:37:02.318 { 00:37:02.318 "method": "accel_set_options", 00:37:02.318 "params": { 00:37:02.318 "small_cache_size": 128, 00:37:02.318 "large_cache_size": 16, 00:37:02.318 "task_count": 2048, 00:37:02.318 "sequence_count": 2048, 00:37:02.318 "buf_count": 2048 00:37:02.318 } 00:37:02.318 } 00:37:02.318 ] 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "subsystem": "bdev", 00:37:02.318 "config": [ 00:37:02.318 { 00:37:02.318 "method": "bdev_set_options", 00:37:02.318 "params": { 00:37:02.318 "bdev_io_pool_size": 65535, 00:37:02.318 "bdev_io_cache_size": 256, 00:37:02.318 "bdev_auto_examine": true, 00:37:02.318 "iobuf_small_cache_size": 128, 
00:37:02.318 "iobuf_large_cache_size": 16 00:37:02.318 } 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "method": "bdev_raid_set_options", 00:37:02.318 "params": { 00:37:02.318 "process_window_size_kb": 1024 00:37:02.318 } 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "method": "bdev_iscsi_set_options", 00:37:02.318 "params": { 00:37:02.318 "timeout_sec": 30 00:37:02.318 } 00:37:02.318 }, 00:37:02.318 { 00:37:02.318 "method": "bdev_nvme_set_options", 00:37:02.318 "params": { 00:37:02.318 "action_on_timeout": "none", 00:37:02.318 "timeout_us": 0, 00:37:02.318 "timeout_admin_us": 0, 00:37:02.318 "keep_alive_timeout_ms": 10000, 00:37:02.318 "arbitration_burst": 0, 00:37:02.318 "low_priority_weight": 0, 00:37:02.318 "medium_priority_weight": 0, 00:37:02.318 "high_priority_weight": 0, 00:37:02.318 "nvme_adminq_poll_period_us": 10000, 00:37:02.318 "nvme_ioq_poll_period_us": 0, 00:37:02.318 "io_queue_requests": 512, 00:37:02.318 "delay_cmd_submit": true, 00:37:02.318 "transport_retry_count": 4, 00:37:02.318 "bdev_retry_count": 3, 00:37:02.318 "transport_ack_timeout": 0, 00:37:02.318 "ctrlr_loss_timeout_sec": 0, 00:37:02.318 "reconnect_delay_sec": 0, 00:37:02.318 "fast_io_fail_timeout_sec": 0, 00:37:02.318 "disable_auto_failback": false, 00:37:02.319 "generate_uuids": false, 00:37:02.319 "transport_tos": 0, 00:37:02.319 "nvme_error_stat": false, 00:37:02.319 "rdma_srq_size": 0, 00:37:02.319 "io_path_stat": false, 00:37:02.319 "allow_accel_sequence": false, 00:37:02.319 "rdma_max_cq_size": 0, 00:37:02.319 "rdma_cm_event_timeout_ms": 0, 00:37:02.319 "dhchap_digests": [ 00:37:02.319 "sha256", 00:37:02.319 "sha384", 00:37:02.319 "sha512" 00:37:02.319 ], 00:37:02.319 "dhchap_dhgroups": [ 00:37:02.319 "null", 00:37:02.319 "ffdhe2048", 00:37:02.319 "ffdhe3072", 00:37:02.319 "ffdhe4096", 00:37:02.319 "ffdhe6144", 00:37:02.319 "ffdhe8192" 00:37:02.319 ] 00:37:02.319 } 00:37:02.319 }, 00:37:02.319 { 00:37:02.319 "method": "bdev_nvme_attach_controller", 00:37:02.319 "params": { 00:37:02.319 "name": "nvme0", 00:37:02.319 "trtype": "TCP", 00:37:02.319 "adrfam": "IPv4", 00:37:02.319 "traddr": "127.0.0.1", 00:37:02.319 "trsvcid": "4420", 00:37:02.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.319 "prchk_reftag": false, 00:37:02.319 "prchk_guard": false, 00:37:02.319 "ctrlr_loss_timeout_sec": 0, 00:37:02.319 "reconnect_delay_sec": 0, 00:37:02.319 "fast_io_fail_timeout_sec": 0, 00:37:02.319 "psk": "key0", 00:37:02.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.319 "hdgst": false, 00:37:02.319 "ddgst": false 00:37:02.319 } 00:37:02.319 }, 00:37:02.319 { 00:37:02.319 "method": "bdev_nvme_set_hotplug", 00:37:02.319 "params": { 00:37:02.319 "period_us": 100000, 00:37:02.319 "enable": false 00:37:02.319 } 00:37:02.319 }, 00:37:02.319 { 00:37:02.319 "method": "bdev_wait_for_examine" 00:37:02.319 } 00:37:02.319 ] 00:37:02.319 }, 00:37:02.319 { 00:37:02.319 "subsystem": "nbd", 00:37:02.319 "config": [] 00:37:02.319 } 00:37:02.319 ] 00:37:02.319 }' 00:37:02.319 01:22:55 keyring_file -- keyring/file.sh@114 -- # killprocess 3954292 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3954292 ']' 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3954292 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3954292 00:37:02.319 01:22:55 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3954292' 00:37:02.319 killing process with pid 3954292 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@965 -- # kill 3954292 00:37:02.319 Received shutdown signal, test time was about 1.000000 seconds 00:37:02.319 00:37:02.319 Latency(us) 00:37:02.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.319 =================================================================================================================== 00:37:02.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:02.319 01:22:55 keyring_file -- common/autotest_common.sh@970 -- # wait 3954292 00:37:02.578 01:22:55 keyring_file -- keyring/file.sh@117 -- # bperfpid=3955707 00:37:02.578 01:22:55 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3955707 /var/tmp/bperf.sock 00:37:02.578 01:22:55 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3955707 ']' 00:37:02.578 01:22:55 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:02.578 01:22:55 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:02.578 01:22:55 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:02.578 "subsystems": [ 00:37:02.578 { 00:37:02.578 "subsystem": "keyring", 00:37:02.578 "config": [ 00:37:02.578 { 00:37:02.578 "method": "keyring_file_add_key", 00:37:02.578 "params": { 00:37:02.578 "name": "key0", 00:37:02.578 "path": "/tmp/tmp.9ZGZqXWEhX" 00:37:02.578 } 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "method": "keyring_file_add_key", 00:37:02.578 "params": { 00:37:02.578 "name": "key1", 00:37:02.578 "path": "/tmp/tmp.SK2iiVjJuW" 00:37:02.578 } 00:37:02.578 } 00:37:02.578 ] 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "subsystem": "iobuf", 00:37:02.578 "config": [ 00:37:02.578 { 00:37:02.578 "method": "iobuf_set_options", 00:37:02.578 "params": { 00:37:02.578 "small_pool_count": 8192, 00:37:02.578 "large_pool_count": 1024, 00:37:02.578 "small_bufsize": 8192, 00:37:02.578 "large_bufsize": 135168 00:37:02.578 } 00:37:02.578 } 00:37:02.578 ] 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "subsystem": "sock", 00:37:02.578 "config": [ 00:37:02.578 { 00:37:02.578 "method": "sock_set_default_impl", 00:37:02.578 "params": { 00:37:02.578 "impl_name": "posix" 00:37:02.578 } 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "method": "sock_impl_set_options", 00:37:02.578 "params": { 00:37:02.578 "impl_name": "ssl", 00:37:02.578 "recv_buf_size": 4096, 00:37:02.578 "send_buf_size": 4096, 00:37:02.578 "enable_recv_pipe": true, 00:37:02.578 "enable_quickack": false, 00:37:02.578 "enable_placement_id": 0, 00:37:02.578 "enable_zerocopy_send_server": true, 00:37:02.578 "enable_zerocopy_send_client": false, 00:37:02.578 "zerocopy_threshold": 0, 00:37:02.578 "tls_version": 0, 00:37:02.578 "enable_ktls": false 00:37:02.578 } 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "method": "sock_impl_set_options", 00:37:02.578 "params": { 00:37:02.578 "impl_name": "posix", 00:37:02.578 "recv_buf_size": 2097152, 00:37:02.578 "send_buf_size": 2097152, 00:37:02.578 "enable_recv_pipe": true, 00:37:02.578 "enable_quickack": false, 00:37:02.578 "enable_placement_id": 0, 00:37:02.578 
"enable_zerocopy_send_server": true, 00:37:02.578 "enable_zerocopy_send_client": false, 00:37:02.578 "zerocopy_threshold": 0, 00:37:02.578 "tls_version": 0, 00:37:02.578 "enable_ktls": false 00:37:02.578 } 00:37:02.578 } 00:37:02.578 ] 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "subsystem": "vmd", 00:37:02.578 "config": [] 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "subsystem": "accel", 00:37:02.578 "config": [ 00:37:02.578 { 00:37:02.578 "method": "accel_set_options", 00:37:02.578 "params": { 00:37:02.578 "small_cache_size": 128, 00:37:02.578 "large_cache_size": 16, 00:37:02.578 "task_count": 2048, 00:37:02.578 "sequence_count": 2048, 00:37:02.578 "buf_count": 2048 00:37:02.578 } 00:37:02.578 } 00:37:02.578 ] 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "subsystem": "bdev", 00:37:02.578 "config": [ 00:37:02.578 { 00:37:02.578 "method": "bdev_set_options", 00:37:02.578 "params": { 00:37:02.578 "bdev_io_pool_size": 65535, 00:37:02.578 "bdev_io_cache_size": 256, 00:37:02.578 "bdev_auto_examine": true, 00:37:02.578 "iobuf_small_cache_size": 128, 00:37:02.578 "iobuf_large_cache_size": 16 00:37:02.578 } 00:37:02.578 }, 00:37:02.578 { 00:37:02.578 "method": "bdev_raid_set_options", 00:37:02.578 "params": { 00:37:02.578 "process_window_size_kb": 1024 00:37:02.578 } 00:37:02.578 }, 00:37:02.578 { 00:37:02.579 "method": "bdev_iscsi_set_options", 00:37:02.579 "params": { 00:37:02.579 "timeout_sec": 30 00:37:02.579 } 00:37:02.579 }, 00:37:02.579 { 00:37:02.579 "method": "bdev_nvme_set_options", 00:37:02.579 "params": { 00:37:02.579 "action_on_timeout": "none", 00:37:02.579 "timeout_us": 0, 00:37:02.579 "timeout_admin_us": 0, 00:37:02.579 "keep_alive_timeout_ms": 10000, 00:37:02.579 "arbitration_burst": 0, 00:37:02.579 "low_priority_weight": 0, 00:37:02.579 "medium_priority_weight": 0, 00:37:02.579 "high_priority_weight": 0, 00:37:02.579 "nvme_adminq_poll_period_us": 10000, 00:37:02.579 "nvme_ioq_poll_period_us": 0, 00:37:02.579 "io_queue_requests": 512, 00:37:02.579 "delay_cmd_submit": true, 00:37:02.579 "transport_retry_count": 4, 00:37:02.579 "bdev_retry_count": 3, 00:37:02.579 "transport_ack_timeout": 0, 00:37:02.579 "ctrlr_loss_timeout_sec": 0, 00:37:02.579 "reconnect_delay_sec": 0, 00:37:02.579 "fast_io_fail_timeout_sec": 0, 00:37:02.579 "disable_auto_failback": false, 00:37:02.579 "generate_uuids": false, 00:37:02.579 "transport_tos": 0, 00:37:02.579 "nvme_error_stat": false, 00:37:02.579 "rdma_srq_size": 0, 00:37:02.579 "io_path_stat": false, 00:37:02.579 "allow_accel_sequence": false, 00:37:02.579 "rdma_max_cq_size": 0, 00:37:02.579 "rdma_cm_event_timeout_ms": 0, 00:37:02.579 "dhchap_digests": [ 00:37:02.579 "sha256", 00:37:02.579 "sha384", 00:37:02.579 "sha512" 00:37:02.579 ], 00:37:02.579 "dhchap_dhgroups": [ 00:37:02.579 "null", 00:37:02.579 "ffdhe2048", 00:37:02.579 "ffdhe3072", 00:37:02.579 "ffdhe4096", 00:37:02.579 "ffdhe6144", 00:37:02.579 "ffdhe8192" 00:37:02.579 ] 00:37:02.579 } 00:37:02.579 }, 00:37:02.579 { 00:37:02.579 "method": "bdev_nvme_attach_controller", 00:37:02.579 "params": { 00:37:02.579 "name": "nvme0", 00:37:02.579 "trtype": "TCP", 00:37:02.579 "adrfam": "IPv4", 00:37:02.579 "traddr": "127.0.0.1", 00:37:02.579 "trsvcid": "4420", 00:37:02.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.579 "prchk_reftag": false, 00:37:02.579 "prchk_guard": false, 00:37:02.579 "ctrlr_loss_timeout_sec": 0, 00:37:02.579 "reconnect_delay_sec": 0, 00:37:02.579 "fast_io_fail_timeout_sec": 0, 00:37:02.579 "psk": "key0", 00:37:02.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.579 
"hdgst": false, 00:37:02.579 "ddgst": false 00:37:02.579 } 00:37:02.579 }, 00:37:02.579 { 00:37:02.579 "method": "bdev_nvme_set_hotplug", 00:37:02.579 "params": { 00:37:02.579 "period_us": 100000, 00:37:02.579 "enable": false 00:37:02.579 } 00:37:02.579 }, 00:37:02.579 { 00:37:02.579 "method": "bdev_wait_for_examine" 00:37:02.579 } 00:37:02.579 ] 00:37:02.579 }, 00:37:02.579 { 00:37:02.579 "subsystem": "nbd", 00:37:02.579 "config": [] 00:37:02.579 } 00:37:02.579 ] 00:37:02.579 }' 00:37:02.579 01:22:55 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:02.579 01:22:55 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:02.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:02.579 01:22:55 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:02.579 01:22:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.579 [2024-07-25 01:22:55.646841] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:37:02.579 [2024-07-25 01:22:55.646937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955707 ] 00:37:02.579 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.579 [2024-07-25 01:22:55.707266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.878 [2024-07-25 01:22:55.799838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.878 [2024-07-25 01:22:55.981094] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:03.811 01:22:56 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:03.811 01:22:56 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:03.811 01:22:56 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:03.811 01:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.811 01:22:56 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:03.811 01:22:56 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:03.811 01:22:56 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:03.811 01:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:03.811 01:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:03.811 01:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.811 01:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:03.811 01:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.068 01:22:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:04.068 01:22:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:04.068 01:22:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:04.068 01:22:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:04.068 01:22:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.068 01:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:04.068 01:22:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:04.325 01:22:57 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:04.325 01:22:57 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:04.325 01:22:57 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:04.325 01:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:04.583 01:22:57 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:04.583 01:22:57 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:04.583 01:22:57 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9ZGZqXWEhX /tmp/tmp.SK2iiVjJuW 00:37:04.583 01:22:57 keyring_file -- keyring/file.sh@20 -- # killprocess 3955707 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3955707 ']' 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3955707 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3955707 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3955707' 00:37:04.583 killing process with pid 3955707 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@965 -- # kill 3955707 00:37:04.583 Received shutdown signal, test time was about 1.000000 seconds 00:37:04.583 00:37:04.583 Latency(us) 00:37:04.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.583 =================================================================================================================== 00:37:04.583 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:04.583 01:22:57 keyring_file -- common/autotest_common.sh@970 -- # wait 3955707 00:37:04.840 01:22:57 keyring_file -- keyring/file.sh@21 -- # killprocess 3954273 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3954273 ']' 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3954273 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3954273 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3954273' 00:37:04.840 killing process with pid 3954273 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@965 -- # kill 3954273 00:37:04.840 [2024-07-25 01:22:57.884572] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:04.840 01:22:57 keyring_file -- common/autotest_common.sh@970 -- # wait 3954273 00:37:05.404 00:37:05.404 real 
0m14.079s 00:37:05.404 user 0m35.051s 00:37:05.404 sys 0m3.155s 00:37:05.404 01:22:58 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:05.404 01:22:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:05.404 ************************************ 00:37:05.404 END TEST keyring_file 00:37:05.404 ************************************ 00:37:05.404 01:22:58 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:05.404 01:22:58 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:05.404 01:22:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:05.404 01:22:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:05.404 01:22:58 -- common/autotest_common.sh@10 -- # set +x 00:37:05.404 ************************************ 00:37:05.404 START TEST keyring_linux 00:37:05.404 ************************************ 00:37:05.404 01:22:58 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:05.404 * Looking for test storage... 00:37:05.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:05.404 01:22:58 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:05.404 01:22:58 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:05.404 01:22:58 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:05.404 01:22:58 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:05.404 01:22:58 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:05.404 01:22:58 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:05.405 01:22:58 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:05.405 01:22:58 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:05.405 01:22:58 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:05.405 01:22:58 
keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.405 01:22:58 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.405 01:22:58 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.405 01:22:58 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:05.405 01:22:58 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:05.405 01:22:58 keyring_linux -- 
keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:05.405 /tmp/:spdk-test:key0 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:05.405 01:22:58 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:05.405 01:22:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:05.405 /tmp/:spdk-test:key1 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3956104 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:05.405 01:22:58 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3956104 00:37:05.405 01:22:58 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3956104 ']' 00:37:05.405 01:22:58 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.405 01:22:58 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:05.405 01:22:58 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:05.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
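The prep_key trace above builds each PSK file by piping the hex key through an inline `python -` heredoc in nvmf/common.sh (format_interchange_psk -> format_key); the heredoc body itself is never echoed into the log. What follows is therefore a hedged, self-contained sketch of what that step appears to compute, assuming the TLS PSK interchange format is NVMeTLSkey-1:<digest>:base64(key || CRC-32):, with the CRC-32 of the key bytes appended little-endian and digest id 0 selecting the "no hash" variant ("00" in the prefix). Under those assumptions the output matches the NVMeTLSkey-1:00:MDAx...JEiQ: value that the keyctl print check verifies later in this log.

# Hedged sketch, not the literal nvmf/common.sh heredoc.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                # the hex string is used as ASCII bytes
crc = struct.pack("<I", zlib.crc32(key))  # assumption: CRC-32 appended little-endian
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
EOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 0
# expected: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: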
00:37:05.405 01:22:58 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:05.405 01:22:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:05.405 [2024-07-25 01:22:58.544918] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 00:37:05.405 [2024-07-25 01:22:58.545018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956104 ] 00:37:05.662 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.662 [2024-07-25 01:22:58.608267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.662 [2024-07-25 01:22:58.698215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:05.920 01:22:58 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:05.920 [2024-07-25 01:22:58.940983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:05.920 null0 00:37:05.920 [2024-07-25 01:22:58.973060] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:05.920 [2024-07-25 01:22:58.973567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.920 01:22:58 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:05.920 531178395 00:37:05.920 01:22:58 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:05.920 468087844 00:37:05.920 01:22:58 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3956187 00:37:05.920 01:22:58 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:05.920 01:22:58 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3956187 /var/tmp/bperf.sock 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3956187 ']' 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:05.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:05.920 01:22:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:05.920 [2024-07-25 01:22:59.041163] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 23.11.0 initialization... 
00:37:05.920 [2024-07-25 01:22:59.041261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956187 ] 00:37:06.177 EAL: No free 2048 kB hugepages reported on node 1 00:37:06.177 [2024-07-25 01:22:59.102763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.177 [2024-07-25 01:22:59.188253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.177 01:22:59 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:06.177 01:22:59 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:06.177 01:22:59 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:06.177 01:22:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:06.434 01:22:59 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:06.434 01:22:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:06.999 01:22:59 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:06.999 01:22:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:06.999 [2024-07-25 01:23:00.065268] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:06.999 nvme0n1 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:07.257 01:23:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:07.257 01:23:00 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:07.257 01:23:00 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.257 01:23:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.257 01:23:00 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:07.514 01:23:00 keyring_linux -- keyring/linux.sh@25 -- # sn=531178395 00:37:07.514 01:23:00 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:07.514 01:23:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
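The get_keysn helper traced above resolves the key's kernel serial number with keyctl search so it can be compared against the .sn field that SPDK reports via keyring_get_keys, and keyctl print then confirms the payload round-trips intact. Below is a condensed sketch of that round trip using only the keyctl subcommands visible in this trace (add, search, print, and the unlink used by cleanup further down); the serial 531178395 is assigned by the kernel at add time, so the literal value is illustrative.

# Condensed sketch of the kernel session-keyring round trip traced above.
name=:spdk-test:key0
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

sn=$(keyctl add user "$name" "$psk" @s)   # install; prints the new serial (e.g. 531178395)
found=$(keyctl search @s user "$name")    # resolve the same name back to a serial
[[ "$found" == "$sn" ]]                   # SPDK's reported .sn must match this value
[[ "$(keyctl print "$sn")" == "$psk" ]]   # payload must round-trip intact
keyctl unlink "$sn"                       # cleanup, as linux.sh's unlink_key does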
00:37:07.514 01:23:00 keyring_linux -- keyring/linux.sh@26 -- # [[ 531178395 == \5\3\1\1\7\8\3\9\5 ]] 00:37:07.514 01:23:00 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 531178395 00:37:07.514 01:23:00 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:07.514 01:23:00 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:07.772 Running I/O for 1 seconds... 00:37:08.705 00:37:08.705 Latency(us) 00:37:08.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.705 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:08.705 nvme0n1 : 1.02 5450.92 21.29 0.00 0.00 23300.09 6505.05 30292.20 00:37:08.705 =================================================================================================================== 00:37:08.705 Total : 5450.92 21.29 0.00 0.00 23300.09 6505.05 30292.20 00:37:08.705 0 00:37:08.705 01:23:01 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:08.705 01:23:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:08.963 01:23:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:08.963 01:23:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:08.963 01:23:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:08.963 01:23:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:08.963 01:23:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.963 01:23:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:09.221 01:23:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:09.221 01:23:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:09.221 01:23:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:09.221 01:23:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:09.221 01:23:02 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:09.221 01:23:02 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:09.221 01:23:02 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:09.221 01:23:02 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:09.221 01:23:02 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:09.221 01:23:02 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:09.221 01:23:02 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:09.221 01:23:02 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:09.479 [2024-07-25 01:23:02.508259] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:09.479 [2024-07-25 01:23:02.508800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3eea0 (107): Transport endpoint is not connected 00:37:09.479 [2024-07-25 01:23:02.509791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3eea0 (9): Bad file descriptor 00:37:09.479 [2024-07-25 01:23:02.510789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:09.479 [2024-07-25 01:23:02.510812] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:09.479 [2024-07-25 01:23:02.510836] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:09.479 request: 00:37:09.479 { 00:37:09.479 "name": "nvme0", 00:37:09.479 "trtype": "tcp", 00:37:09.479 "traddr": "127.0.0.1", 00:37:09.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.479 "adrfam": "ipv4", 00:37:09.479 "trsvcid": "4420", 00:37:09.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.479 "psk": ":spdk-test:key1", 00:37:09.479 "method": "bdev_nvme_attach_controller", 00:37:09.479 "req_id": 1 00:37:09.479 } 00:37:09.479 Got JSON-RPC error response 00:37:09.479 response: 00:37:09.479 { 00:37:09.479 "code": -5, 00:37:09.479 "message": "Input/output error" 00:37:09.479 } 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@33 -- # sn=531178395 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 531178395 00:37:09.479 1 links removed 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@33 -- # sn=468087844 00:37:09.479 01:23:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 468087844 00:37:09.479 1 links removed 00:37:09.479 01:23:02 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 3956187 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3956187 ']' 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3956187 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3956187 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3956187' 00:37:09.479 killing process with pid 3956187 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@965 -- # kill 3956187 00:37:09.479 Received shutdown signal, test time was about 1.000000 seconds 00:37:09.479 00:37:09.479 Latency(us) 00:37:09.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.479 =================================================================================================================== 00:37:09.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:09.479 01:23:02 keyring_linux -- common/autotest_common.sh@970 -- # wait 3956187 00:37:09.737 01:23:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3956104 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3956104 ']' 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3956104 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3956104 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3956104' 00:37:09.737 killing process with pid 3956104 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@965 -- # kill 3956104 00:37:09.737 01:23:02 keyring_linux -- common/autotest_common.sh@970 -- # wait 3956104 00:37:10.304 00:37:10.304 real 0m4.803s 00:37:10.304 user 0m9.057s 00:37:10.304 sys 0m1.555s 00:37:10.304 01:23:03 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:10.304 01:23:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:10.304 ************************************ 00:37:10.304 END TEST keyring_linux 00:37:10.304 ************************************ 00:37:10.304 01:23:03 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@352 -- 
# '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:10.304 01:23:03 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:10.304 01:23:03 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:10.304 01:23:03 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:10.304 01:23:03 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:10.304 01:23:03 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:10.304 01:23:03 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:10.304 01:23:03 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:10.304 01:23:03 -- common/autotest_common.sh@10 -- # set +x 00:37:10.304 01:23:03 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:10.304 01:23:03 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:10.304 01:23:03 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:10.304 01:23:03 -- common/autotest_common.sh@10 -- # set +x 00:37:12.201 INFO: APP EXITING 00:37:12.201 INFO: killing all VMs 00:37:12.201 INFO: killing vhost app 00:37:12.201 INFO: EXIT DONE 00:37:12.766 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:12.766 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:12.766 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:12.766 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:12.766 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:12.766 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:12.766 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:13.024 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:13.024 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:13.024 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:13.024 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:13.024 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:13.024 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:13.024 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:13.024 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:13.024 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:13.024 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:14.399 Cleaning 00:37:14.399 Removing: /var/run/dpdk/spdk0/config 00:37:14.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:14.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:14.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:14.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:14.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:14.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:14.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:14.400 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:14.400 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:14.400 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:14.400 Removing: /var/run/dpdk/spdk1/config 00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:14.400 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:14.400 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:14.400 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:14.400 Removing: /var/run/dpdk/spdk2/config 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:14.400 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:14.400 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:14.400 Removing: /var/run/dpdk/spdk3/config 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:14.400 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:14.400 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:14.400 Removing: /var/run/dpdk/spdk4/config 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:14.400 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:14.400 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:14.400 Removing: /dev/shm/bdev_svc_trace.1 00:37:14.400 Removing: /dev/shm/nvmf_trace.0 00:37:14.400 Removing: /dev/shm/spdk_tgt_trace.pid3636576 00:37:14.400 Removing: /var/run/dpdk/spdk0 00:37:14.400 Removing: /var/run/dpdk/spdk1 00:37:14.400 Removing: /var/run/dpdk/spdk2 00:37:14.400 Removing: /var/run/dpdk/spdk3 00:37:14.400 Removing: /var/run/dpdk/spdk4 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3635023 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3635758 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3636576 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3637006 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3637693 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3637833 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3638551 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3638564 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3638808 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3640107 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3641041 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3641241 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3641534 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3641740 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3641927 00:37:14.400 Removing: 
/var/run/dpdk/spdk_pid3642085 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3642242 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3642426 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3643007 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3645355 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3645521 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3645697 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3645813 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3646121 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3646248 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3646555 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3646590 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3646851 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3646856 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3647028 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3647156 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3647520 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3647678 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3647873 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3648039 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3648190 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3648250 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3648525 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3648686 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3648839 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3648996 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3649268 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3649431 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3649584 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3649754 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3650018 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3650170 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3650329 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3650598 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3650766 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3650918 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3651077 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3651345 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3651516 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3651670 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3651938 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3652104 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3652173 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3652379 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3654547 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3708183 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3710797 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3717596 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3720897 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3723751 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3724149 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3731275 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3731327 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3731927 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3732588 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3733200 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3733542 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3733650 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3733786 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3733922 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3733925 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3734577 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3735214 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3735778 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3736175 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3736242 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3736438 00:37:14.400 Removing: 
/var/run/dpdk/spdk_pid3737323 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3738036 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3743387 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3743547 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3746164 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3749854 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3752517 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3758773 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3763966 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3765157 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3765814 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3775997 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3778207 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3803390 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3806167 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3807345 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3808544 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3808669 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3808809 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3808832 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3809262 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3810690 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3811910 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3812224 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3813842 00:37:14.400 Removing: /var/run/dpdk/spdk_pid3814258 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3814706 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3817091 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3820461 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3823865 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3847506 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3850145 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3854022 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3854968 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3855939 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3858479 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3860834 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3865021 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3865039 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3867805 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3867941 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3868077 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3868339 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3868375 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3869435 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3870714 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3871989 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3873704 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3874878 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3876060 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3879878 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3880211 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3881491 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3882340 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3885924 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3887901 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3891188 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3894634 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3900844 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3905801 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3905851 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3917984 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3918396 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3918844 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3919323 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3919822 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3920310 00:37:14.659 Removing: 
/var/run/dpdk/spdk_pid3920716 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3921121 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3923566 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3923761 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3927542 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3927682 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3929317 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3934244 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3934253 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3937258 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3939161 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3940604 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3941412 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3942816 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3943571 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3948912 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3949228 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3949619 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3951162 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3951564 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3951841 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3954273 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3954292 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3955707 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3956104 00:37:14.659 Removing: /var/run/dpdk/spdk_pid3956187 00:37:14.659 Clean 00:37:14.659 01:23:07 -- common/autotest_common.sh@1447 -- # return 0 00:37:14.659 01:23:07 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:14.659 01:23:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.659 01:23:07 -- common/autotest_common.sh@10 -- # set +x 00:37:14.659 01:23:07 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:14.659 01:23:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.659 01:23:07 -- common/autotest_common.sh@10 -- # set +x 00:37:14.659 01:23:07 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:14.659 01:23:07 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:14.659 01:23:07 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:14.659 01:23:07 -- spdk/autotest.sh@391 -- # hash lcov 00:37:14.659 01:23:07 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:14.659 01:23:07 -- spdk/autotest.sh@393 -- # hostname 00:37:14.659 01:23:07 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:14.917 geninfo: WARNING: invalid characters removed from testname! 
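Coverage capture has just produced cov_test.info for this run (the geninfo warning only reports characters stripped from the test name). The lcov calls that follow merge it with the pre-test baseline and then prune third-party and example sources before the report is generated. A condensed sketch of that pipeline with the long workspace paths shortened; the flags shown are taken from the log itself (some --rc genhtml settings omitted for brevity):

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info     # merge baseline + test run
    $LCOV -r cov_total.info '*/dpdk/*'  -o cov_total.info         # drop bundled DPDK sources
    $LCOV -r cov_total.info '/usr/*'    -o cov_total.info         # drop system sources
    $LCOV -r cov_total.info '*/examples/vmd/*' -o cov_total.info  # drop example apps
    # ...the spdk_lspci and spdk_top app paths are removed the same way below.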
00:37:47.019 01:23:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:47.019 01:23:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:49.553 01:23:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:52.838 01:23:45 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:55.373 01:23:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:57.906 01:23:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:01.190 01:23:53 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:01.190 01:23:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.190 01:23:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:01.190 01:23:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.190 01:23:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.190 01:23:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.190 01:23:53 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.190 01:23:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.190 01:23:53 -- paths/export.sh@5 -- $ export PATH 00:38:01.190 01:23:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.190 01:23:53 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:01.190 01:23:53 -- common/autobuild_common.sh@440 -- $ date +%s 00:38:01.190 01:23:53 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721863433.XXXXXX 00:38:01.190 01:23:53 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721863433.JRDjM4 00:38:01.190 01:23:53 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:38:01.190 01:23:53 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:38:01.190 01:23:53 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:01.190 01:23:53 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:01.190 01:23:53 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:01.190 01:23:53 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:01.190 01:23:53 -- common/autobuild_common.sh@456 -- $ get_config_params 00:38:01.190 01:23:53 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:38:01.190 01:23:53 -- common/autotest_common.sh@10 -- $ set +x 00:38:01.190 01:23:54 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:01.190 01:23:54 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:38:01.190 01:23:54 -- pm/common@17 -- $ local monitor 00:38:01.190 01:23:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:01.190 01:23:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:01.191 01:23:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:01.191 
01:23:54 -- pm/common@21 -- $ date +%s 00:38:01.191 01:23:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:01.191 01:23:54 -- pm/common@21 -- $ date +%s 00:38:01.191 01:23:54 -- pm/common@25 -- $ sleep 1 00:38:01.191 01:23:54 -- pm/common@21 -- $ date +%s 00:38:01.191 01:23:54 -- pm/common@21 -- $ date +%s 00:38:01.191 01:23:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721863434 00:38:01.191 01:23:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721863434 00:38:01.191 01:23:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721863434 00:38:01.191 01:23:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721863434 00:38:01.191 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721863434_collect-vmstat.pm.log 00:38:01.191 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721863434_collect-cpu-load.pm.log 00:38:01.191 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721863434_collect-cpu-temp.pm.log 00:38:01.191 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721863434_collect-bmc-pm.bmc.pm.log 00:38:02.126 01:23:55 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:38:02.126 01:23:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:02.126 01:23:55 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:02.126 01:23:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:02.126 01:23:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:02.126 01:23:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:02.126 01:23:55 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:02.126 01:23:55 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:02.126 01:23:55 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:02.126 01:23:55 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:02.126 01:23:55 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:02.126 01:23:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:02.126 01:23:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:02.126 01:23:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:02.126 01:23:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:02.126 01:23:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:02.126 01:23:55 -- pm/common@44 -- $ pid=3967351 00:38:02.126 01:23:55 -- pm/common@50 -- $ kill -TERM 3967351 00:38:02.126 01:23:55 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:02.126 01:23:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:02.126 01:23:55 -- pm/common@44 -- $ pid=3967353 00:38:02.126 01:23:55 -- pm/common@50 -- $ kill -TERM 3967353 00:38:02.126 01:23:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:02.126 01:23:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:02.126 01:23:55 -- pm/common@44 -- $ pid=3967355 00:38:02.126 01:23:55 -- pm/common@50 -- $ kill -TERM 3967355 00:38:02.126 01:23:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:02.126 01:23:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:02.126 01:23:55 -- pm/common@44 -- $ pid=3967386 00:38:02.126 01:23:55 -- pm/common@50 -- $ sudo -E kill -TERM 3967386 00:38:02.126 + [[ -n 3530299 ]] 00:38:02.126 + sudo kill 3530299 00:38:02.135 [Pipeline] } 00:38:02.152 [Pipeline] // stage 00:38:02.158 [Pipeline] } 00:38:02.174 [Pipeline] // timeout 00:38:02.179 [Pipeline] } 00:38:02.197 [Pipeline] // catchError 00:38:02.203 [Pipeline] } 00:38:02.220 [Pipeline] // wrap 00:38:02.225 [Pipeline] } 00:38:02.241 [Pipeline] // catchError 00:38:02.250 [Pipeline] stage 00:38:02.251 [Pipeline] { (Epilogue) 00:38:02.265 [Pipeline] catchError 00:38:02.267 [Pipeline] { 00:38:02.280 [Pipeline] echo 00:38:02.281 Cleanup processes 00:38:02.287 [Pipeline] sh 00:38:02.567 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:02.568 3967489 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:02.568 3967616 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:02.587 [Pipeline] sh 00:38:02.874 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:02.874 ++ grep -v 'sudo pgrep' 00:38:02.874 ++ awk '{print $1}' 00:38:02.874 + sudo kill -9 3967489 00:38:02.884 [Pipeline] sh 00:38:03.165 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:13.172 [Pipeline] sh 00:38:13.452 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:13.452 Artifacts sizes are good 00:38:13.466 [Pipeline] archiveArtifacts 00:38:13.472 Archiving artifacts 00:38:13.692 [Pipeline] sh 00:38:13.974 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:13.987 [Pipeline] cleanWs 00:38:13.997 [WS-CLEANUP] Deleting project workspace... 00:38:13.997 [WS-CLEANUP] Deferred wipeout is used... 00:38:14.003 [WS-CLEANUP] done 00:38:14.005 [Pipeline] } 00:38:14.026 [Pipeline] // catchError 00:38:14.039 [Pipeline] sh 00:38:14.318 + logger -p user.info -t JENKINS-CI 00:38:14.326 [Pipeline] } 00:38:14.344 [Pipeline] // stage 00:38:14.350 [Pipeline] } 00:38:14.368 [Pipeline] // node 00:38:14.374 [Pipeline] End of Pipeline 00:38:14.415 Finished: SUCCESS